AI makes it dramatically cheaper to produce software that appears to work. But “building an app” and “engineering a system” are two different activities that people keep confusing, and the gap between them is where most of the actual work lives.
Every few days, a post goes viral. “I built a mobile app with no coding experience.” “I cloned Spotify in a weekend.” “As a non-developer, I just shipped my first product.”
These posts aren’t lying. The apps exist. They run, they have screens and buttons and databases. Something was clearly made.
But I keep noticing something about the language. It’s always “I built.” Never “I’m running.” Never “I’m operating.” The celebration is always at the moment of creation, because that’s where the story ends. The app is born, the screenshot is taken, the post is published. Roll credits.
There’s a video that went viral a while back that captures this perfectly: a group of Vietnamese content creators building a Bugatti Chiron out of clay. The end result looks like a Bugatti. From the outside, in a photo, you might not be able to tell the difference.
It is, of course, not a Bugatti. The thing that makes a Bugatti a Bugatti, the engineering that lets it do 260 mph without killing you, is entirely absent. What exists is a shape.
A lot of software being celebrated right now is clay Bugattis.
This is what I’d call the illusion of building: the belief that producing software that appears to work is the same as engineering software that actually works. AI has made the first activity dramatically cheaper. It has not made the second one any less hard. And the distance between the two is where most of software engineering actually lives.
The two pages
Google Search has two pages. A text input. A button. A list of results.

Even before large language models existed, anyone with a web development book and a free weekend could have made something that looks exactly like it. A search box, a submit handler, a results page. The clay Bugatti of software. You could screenshot it, and from the outside, it would be indistinguishable from the real thing.
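To make the point concrete, here is roughly what the clay-Bugatti version amounts to. This is a hypothetical sketch, nothing Google-shaped: naive, case-insensitive substring matching over an in-memory list. It demos perfectly.

```python
def naive_search(query, docs):
    """The 'works in a screenshot' search engine: a case-insensitive
    substring scan over whatever documents happen to be in memory."""
    q = query.lower()
    return [d for d in docs if q in d.lower()]
```

Wire it to a text input and a results page and a screenshot is indistinguishable from the real thing. There is no ranking, no index, no latency budget, no abuse handling; every query re-scans every document.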
Google employs tens of thousands of engineers for those two pages.
This is the interface fallacy: confusing what a product looks like with what a product is. The interface is the thinnest layer of the system. It is what users see. It is almost nothing of what they rely on.
So what are those Google engineers actually doing?
- Relevance and ranking. Making sure the results are useful, and continuously improving them as queries, content, and expectations evolve.
- Latency. Returning results in hundreds of milliseconds, from anywhere on the planet, for billions of queries a day.
- Scale. Indexing the web. Crawling, storing, updating, and reprocessing at a pace that doesn’t fall behind reality.
- Reliability. Keeping it running. Handling failures invisibly. Designing systems where incidents are survivable.
- Abuse. Spam, fraud, scraping, adversarial SEO, legal takedowns. An entire adversarial ecosystem working to corrupt the product, every day.
- Security. Protecting billions of users, petabytes of data, and infrastructure that is itself a high-value target.
- Privacy and compliance. Data handling guarantees, regulatory requirements across jurisdictions, audit trails, retention policies.
- Cost. Doing all of the above within an economic envelope that sustains the business.
- Change. Every dependency, platform, browser, device, regulation, and user expectation shifts over time. The system adapts or it dies.
The engineering headcount isn’t building the two pages. It’s keeping “simple” feeling simple.
Fighting entropy
There is a concept from thermodynamics that clarifies what’s actually happening: entropy. The tendency of systems to move toward disorder. Maintaining order requires continuous energy. A house left unattended doesn’t improve. Paint peels, pipes corrode, foundations shift. The Second Law is indifferent to how well the house was built.
Software obeys a version of this law. Code rots. Dependencies ship breaking changes. User expectations evolve. Attackers find new angles. Scale reveals failure modes that didn’t exist at smaller numbers. Regulations change. Team members leave and take context with them. The infrastructure underneath the application moves on its own schedule, whether you’re ready or not.
Writing code is creating order. Software engineering is fighting entropy.
This distinction is the thing the “I built an app” narrative misses entirely. The moment of creation is the moment of maximum order and minimum entropy. Every day after that, entropy increases, and the system requires continuous attention and operational investment to stay functional.
A fresh app with no users, no adversaries, no compliance surface, and no uptime requirements exists in a vacuum. It faces no entropy. And so it feels complete.
A system serving real users in the real world faces entropy on every axis, simultaneously, forever.
That’s why Google needs thousands of engineers. Not to build the two pages, but to hold back entropy at planetary scale.
From code to institution
A useful way to think about the stages of software maturity:
Code → Prototype → Product → Service → Institution
Code is instructions that execute. AI is exceptionally good at producing code.
Prototype is something that works on your machine, in your demo, under conditions you control. AI gets you here quickly, sometimes in minutes. This is where most “I built an app” stories end.
Product is coherent user value. Edge cases handled, error states considered, a UX that doesn’t just work in the demo but works when a real person with real data and real expectations tries to use it for something that matters to them. AI helps here. It also starts to show its limits.
Service is reliable operation at scale: monitoring, incident response, security, performance under real load, deployment pipelines, observability, cost management, and everything required to keep a product running when you’re not watching it. AI has almost nothing to say about this.
Institution is durable advantage: brand, trust, partnerships, distribution, data, compliance posture, organizational learning, reputation. This is accumulated over years and cannot be generated.
AI accelerates Code to Prototype. It sometimes reaches Product. Most “thousand-engineer” teams live in Service and Institution, the layers where entropy is highest and the work is least visible.
What AI actually changes
Intellectual honesty matters here. AI isn’t nothing. Pretending otherwise is cope.
AI genuinely compresses the cost of boilerplate: CRUD scaffolds, form handling, API wiring, data transformations that used to take days now take minutes. It makes prototyping dramatically faster, getting an idea to a visible, clickable state in hours instead of weeks. It makes personal tooling practical: the script you’d never have bothered writing because the effort wasn’t worth it now costs nearly nothing. And it helps you learn: exploring unfamiliar codebases, understanding framework patterns, getting unstuck on syntax.
These are real gains. For personal tools, local automations, internal utilities, and MVPs, this is a genuine shift. The effort required to make useful software dropped through the floor. That matters.
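As an illustration of that near-zero-effort tier, here is a sketch of the kind of throwaway personal tool I mean. The function names and the renaming rule are my own invention for the example, not anything from a real product: a script that tidies messy filenames in a folder.

```python
import pathlib


def slug(name: str) -> str:
    # Normalize one filename: lowercase, spaces become dashes.
    return name.lower().replace(" ", "-")


def tidy_folder(folder: str) -> list[str]:
    # Rename every file in `folder` to its normalized name
    # and return the resulting names.
    result = []
    for path in sorted(pathlib.Path(folder).iterdir()):
        if path.is_file():
            target = path.with_name(slug(path.name))
            if target != path:
                path.rename(target)
            result.append(target.name)
    return result
```

Ten years ago this was “ask a developer friend” territory; today it is a one-sentence prompt. That is the floor dropping, and it is worth celebrating for exactly what it is.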
But the things that make a product real (distribution, trust, reliability, operational maturity, security, compliance, domain expertise) are not implementation problems. They’re accumulated through real usage, under real constraints, over real time. They cannot be generated in a weekend, because they don’t come from code. They come from sustained exposure to reality.
Implementation, the cost of producing the artifact, got cheaper. Outcomes didn’t. AI commoditizes implementation faster than it commoditizes outcomes.
Why “X is dead because I cloned it” is wrong
This is the interface fallacy applied to competition: if I can recreate the screens, I can replace the product.
But the screens were never the product.
What makes a product defensible isn’t its feature list. It’s the accumulated weight of everything behind the feature list: the data, the integrations, the reliability track record, the distribution that puts it in front of users, the trust that makes them stay, the compliance posture that lets it operate in regulated environments, the organizational learning from years of incidents, edge cases, and user feedback.
You can clone Notion’s interface. You cannot clone the integrations ecosystem, the team that has operated it at scale while learning from every failure, or the distribution that puts it in front of millions of knowledge workers. A clone starts at maximum order and zero operational experience. The original has been fighting entropy for years and surviving. That’s the moat.
This is not to say established businesses are invincible. They aren’t. But they don’t get killed by clones. They get killed by products that solve the problem differently, or better, or for a different audience. They get killed by better execution, not by cheaper reproduction.
The pattern this follows
AI is not the first technology to make production cheaper. Every significant tooling advance follows the same arc.
Desktop publishing made it cheap to produce documents that look professionally typeset. It didn’t produce more graphic designers. It produced more bad typography. The premium on good design increased, because now everyone could see the difference between “looks like a brochure” and “communicates effectively.”
Stock photography made images cheap. It didn’t replace photographers. It replaced bad photographs. The premium on distinctive, intentional visual work went up.
Website builders made it trivial to put something on the internet that looks like a website. They didn’t replace web development. They replaced the simplest tier of it, and revealed how much more there was beyond “a page that loads.”
In every case, the accessible layer got commoditized. In every case, the premium shifted to the harder, less visible work that was always there but previously bundled together with the easy parts.
AI is doing this to software, at a much larger scale and faster pace. It commoditizes the typing of code. The thinking becomes more visible, more valuable, and harder to fake.
What this actually means
If you build software for a living, here’s my perspective:
The floor dropped. Things that used to require a developer, such as personal tools, simple automations, internal dashboards, and quick data transformations, often don’t anymore. This is genuinely good. It frees engineering time for problems that actually need engineering.
The ceiling didn’t move. System design, operational reliability, security, performance at scale, product judgment, and organizational coordination are exactly as hard as they were. They might be harder, because cheaper code production means more systems exist in the world, which means more surface area for entropy, more things to integrate, more complexity to manage.
The middle is compressing. The work that was “harder than a script, easier than a system” (a lot of straightforward application development) is where the most disruption happens. Not because it disappears, but because the time and cost to reach a working version drop significantly.
That means there are about to be a lot more clay Bugattis in the world.
This isn’t a reason to dismiss the excitement, though. AI making software accessible to more people is a genuine good. The clay Bugatti is real craftsmanship. Building something that works, even as a prototype, even under ideal conditions, is not nothing.
But the illusion was never that the clay is bad. The illusion of building is that it looks so much like the real thing that people forget the difference.
And as AI improves, the clay only gets better. The prototypes become more polished, the demos more convincing, the gap between “looks like a product” and “is a product” harder to spot from the outside. The gap doesn’t shrink. It just becomes harder to see.
Everyone is asking whether AI will replace software engineers. That misses the point. The question is what happens when everyone can build the shape, but far fewer can make it real.