You don't 3D print a house. You print your tools.
Vibe coding is to engineering what 3D printing is to making, and that’s exactly why it matters.
There’s a recurring debate in our industry about whether vibe coding will replace serious software engineering. Both camps are framing the question wrong. The right reference is the desktop 3D printer.
When 3D printing went mainstream a decade ago, the same two camps showed up. One predicted print-on-demand cars and houses; the other dismissed it as a toy. What happened was stranger. 3D printing didn’t replace manufacturing. It collapsed the cost of bespoke tooling. A specific bracket for a specific shelf in a specific corner of your specific workshop used to be a project. Now it’s a Sunday afternoon. Nobody prints a load-bearing wall. Everyone prints jigs, fixtures, replacement knobs, and tools tailored to the exact job at hand.
That’s the mental model for vibe coding.
The analogy, precisely
Production systems engineering still works the way it always has. Distributed transactions, security boundaries, multi-region failover, regulatory-grade audit trails: none of that becomes "vibe-able." The constraints are the same: correctness, throughput, observability, blast radius. If anything, the bar has gone up, because the cost of producing plausible-looking wrong code has dropped to zero.
But there's a category of software that used to sit in a no-man's-land: too specific to be worth packaging as a product, too tedious to write by hand for a single use, too critical to skip entirely. Internal scripts that should have safety rails. CLIs you run twice a year. Migration tools tied to the exact shape of your stack. These were software's brackets and jigs: necessary, valuable, and almost always either skipped or built poorly, because nobody had the budget for them.
That’s where vibe coding shines. It’s a workshop tool that brings industrial-grade results to one-off problems. The cost of bespoke, well-built tooling has collapsed to the point where it makes economic sense to build it for a single use.
The instance: a Route 53 migration
Last week, I needed to move a Route 53 hosted zone from one AWS account to another. Standard enterprise hygiene: wrong account ownership, billing consolidation, the usual story. The problem itself is straightforward if you know Route 53: you can't transfer a hosted zone directly between accounts. You list the records in the source, create a new zone in the destination, replay the records into it, then cut over the registrar's NS delegation.
Each step has small traps. The apex NS records and the SOA record are auto-generated by AWS and will be rejected on import. Pagination on ListResourceRecordSets uses a three-field cursor: name, type, and set identifier, not a simple token. The ChangeResourceRecordSets API has a hard cap of 1000 changes per call, but it gives much better error messages if you batch smaller changes. Private zones require VPC re-association and are a separate problem. None of these is hard. They’re just sharp edges that someone running this once is statistically guaranteed to hit.
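To make the pagination trap concrete, here's a sketch of the three-field cursor loop in TypeScript. The page shape mirrors what ListResourceRecordSets returns; the `fetchPage` callback and the `listAllRecordSets` name are illustrative, standing in for a `Route53Client.send` call rather than the tool's actual API.

```typescript
// Sketch of the three-field pagination loop for ListResourceRecordSets.
// fetchPage stands in for route53.send(new ListResourceRecordSetsCommand(...)).

interface RecordSet {
  Name: string;
  Type: string;
  SetIdentifier?: string;
}

interface RecordPage {
  ResourceRecordSets: RecordSet[];
  IsTruncated: boolean;
  NextRecordName?: string;
  NextRecordType?: string;
  NextRecordIdentifier?: string;
}

interface Cursor {
  StartRecordName?: string;
  StartRecordType?: string;
  StartRecordIdentifier?: string;
}

async function listAllRecordSets(
  fetchPage: (cursor: Cursor) => Promise<RecordPage>,
): Promise<RecordSet[]> {
  const all: RecordSet[] = [];
  let cursor: Cursor = {};
  for (;;) {
    const page = await fetchPage(cursor);
    all.push(...page.ResourceRecordSets);
    if (!page.IsTruncated) return all;
    // Carry all three cursor fields forward together; the identifier only
    // appears on weighted/latency/failover sets, but dropping it when it is
    // present skips or repeats records.
    cursor = {
      StartRecordName: page.NextRecordName,
      StartRecordType: page.NextRecordType,
      StartRecordIdentifier: page.NextRecordIdentifier,
    };
  }
}
```

The dependency-injected `fetchPage` is a convenience for the sketch; it also happens to make the loop trivially testable without AWS credentials.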
Pre-2023, I had three options. Run it manually through the console: slow, fat-finger-prone, no audit trail. Write a one-off Bash script of AWS CLI calls: faster, but every safety check I wanted to add would cost another hour. Build a proper internal tool: justified for a team running this monthly, hard to justify for a one-time job.
The 3D-printer-for-tools answer is option four: build the proper tool anyway, because building it is no longer expensive.
Spec first, then vibe
This is the part people get wrong about vibe coding. Because the model writes fast, vague specs produce a lot of confidently wrong code very quickly. The discipline shifts from typing to specifying.
The dialogue that produced this tool started with the problem, and implementation followed. I described what I was trying to do: move a hosted zone safely between accounts, with the registrar transfer as a separate concern. The conversation forced a series of decisions before any code existed.
Scope: public hosted zones only. Private zones are deferred to v2, because cross-account VPC association is a different problem with different failure modes, and conflating the two in v1 dilutes the design.
Trust model: never mutate anything until the operator has confirmed which account they’re talking to. STS GetCallerIdentity runs on both source and destination credentials at startup; the account IDs and caller ARNs are shown in plain text, and the operator confirms before the tool proceeds.
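The pure core of that trust model is small enough to sketch. A minimal sketch, assuming both identities were just fetched via STS GetCallerIdentity on their respective credential sets; `validateAccountPair` is an illustrative name, not the tool's API.

```typescript
// Pure core of the trust-model check, sketched with illustrative names.

interface CallerIdentity {
  accountId: string; // 12-digit AWS account id
  arn: string;       // caller ARN, shown to the operator in plain text
}

// Returns reasons to refuse to proceed. An empty result means "ask the
// operator to confirm", never "go ahead silently".
function validateAccountPair(
  source: CallerIdentity,
  destination: CallerIdentity,
): string[] {
  const problems: string[] = [];
  if (source.accountId === destination.accountId) {
    problems.push(
      `source and destination resolve to the same account (${source.accountId})`,
    );
  }
  for (const [label, id] of [
    ["source", source.accountId],
    ["destination", destination.accountId],
  ] as const) {
    if (!/^\d{12}$/.test(id)) {
      problems.push(`${label} account id looks malformed: ${id}`);
    }
  }
  return problems;
}
```

Catching a source and destination that resolve to the same account is the cheapest version of the check, and it's exactly the operator error most worth preventing.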
Credential surface: named AWS profiles and environment variables, nothing else. No baked-in keys, no custom config files. The credential chain is the SDK’s; the tool just picks where to source from.
Reversibility: the tool stops short of the irreversible step. It replicates the zone, records it, then prints the new name servers and stops. Updating the registrar’s NS delegation is a manual final step, deliberately, because that’s the cutover moment, and a human should be the one who pulls that lever.
Failure modes: which records get skipped (apex NS, SOA), what batch size to use (100, not 1000, clearer error messages outweigh the marginal call count), how pagination is handled (full marker-based loops, not “first page is probably fine”).
These decisions were made in prose, before any TypeScript existed. The model is excellent at translating that prose into code; it is much less reliable at making these decisions for you. Spec-driven vibe coding means the operator writes the spec, the model writes the code, and the operator reviews both for fidelity.
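As one example of that translation, the skip-and-batch decision maps almost mechanically into code. A sketch with illustrative names; the record shape is trimmed to the two fields the filter needs, and the batch size deliberately sits well under the API's cap.

```typescript
// Sketch of the skip-and-batch rule from the spec: drop the auto-generated
// apex NS and the SOA, then chunk the rest into UPSERT batches of 100.

interface RecordSet {
  Name: string;
  Type: string;
}

interface Change {
  Action: "UPSERT";
  ResourceRecordSet: RecordSet;
}

function buildUpsertBatches(
  records: RecordSet[],
  zoneApex: string, // e.g. "example.com." — note the trailing dot
  batchSize = 100,  // deliberately under the API's 1000-change cap
): Change[][] {
  const importable = records.filter((r) => {
    if (r.Type === "SOA") return false; // destination zone has its own
    if (r.Type === "NS" && r.Name === zoneApex) return false; // AWS-managed
    return true; // delegated NS records below the apex are kept
  });
  const changes: Change[] = importable.map((r) => ({
    Action: "UPSERT",
    ResourceRecordSet: r,
  }));
  const batches: Change[][] = [];
  for (let i = 0; i < changes.length; i += batchSize) {
    batches.push(changes.slice(i, i + batchSize));
  }
  return batches;
}
```

Note the filter keeps NS records below the apex: those are real delegations the destination zone needs, and dropping them along with the apex set would silently break subdomains.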
The tool
The result is a CLI called route53-aws-to-aws-transfer, written in TypeScript with the AWS SDK v3. The structure mirrors the spec. A credentials module resolves either a named profile or environment variables and runs an STS identity check, returning the validated account ID and caller ARN so the CLI layer can show them to the operator for confirmation. Two independent credential resolutions happen, one for the source account, one for the destination, because conflating them is the most likely operator error and the easiest to prevent at the boundary.
A Route53 module wraps the SDK calls the migration actually needs: paginated zone listing filtered to public zones, paginated record-set listing with the three-field cursor that the API requires and the SDK doesn’t abstract, zone creation with a unique caller reference, and the change-set builder that explicitly drops apex NS and SOA records before batching the rest into UPSERT calls.
An orchestration module sequences these against the operator’s confirmed inputs and emits structured progress. The CLI layer uses @inquirer/prompts for the interactive flow, chalk for the highlighting that draws the eye to account IDs and name-server lists, and ora for the spinners that make long pagination loops feel like progress rather than a hang.
The whole tool is around 400 lines of TypeScript. It does one thing. It does it with the safety rails I’d expect from an internal platform team’s tooling. It will probably run three times in its life, and that’s fine, because the cost of building it correctly was lower than the cost of running the migration carelessly even once.
What this means if you’re running engineering
The economic shift this represents is small in each instance but large in the aggregate. The list of things that were previously "not worth building properly" is enormous: data migration scripts, one-off ETL jobs, internal admin CLIs, environment-bootstrap tools, audit-report generators, ad-hoc dashboards, throwaway integrations between two SaaS products you happen to use. Every team has dozens. Most are currently either absent, with the work done by hand, or present in a form that's basically a liability: Bash, no tests, no logging, run from someone's laptop.
If you're a CTO or a tech lead, the practical question is what quality bar you hold for built-once tooling. My answer for my own team is the same as our production bar, minus the scale concerns: STS validation, structured error handling, idempotency where the underlying API allows it, and no silent failures. The model can hold that bar if you specify it. It absolutely won't if you don't.
This is also where vibe coding stops being a private hobby and starts being a team practice. The artifacts are small enough to review properly. The specs are short enough to write down. The economics work even for tooling a single engineer will use once, which means there’s no longer an excuse for the un-toolable middle. That category just collapsed.
Where to go from here?
What desktop 3D printing did for the workshop, vibe coding is doing for software. Production engineering is still production engineering, with all the disciplines that it requires. What returns is a layer we lost when software industrialized: the ability to make exactly the tool you need, for exactly the job in front of you, at a quality level you would have respected even from a professional. Vibe-code the Route 53 migration tool you needed this morning. The alternative is doing the migration without it. The workshop is back. Print your tools.