
You Are the Bottleneck Now

Written by Marius Horatau
Published on March 14, 2026

AI can now generate code, fixes, and explanations faster than most developers can properly evaluate them. That sounds like a productivity breakthrough. It may also be the fastest way to become a worse engineer without noticing.


There was a time when the bottleneck in software was the computer.

Compilation took time. Deployments took time. Provisioning took time. You could think faster than the machine could move, so the craft naturally grew around waiting: reading docs, tracing stack traces, staying with a bug long enough to feel its shape. You’d stare at the same few lines until they gave up their secret.

That world is mostly gone.

Today, the machine is rarely the bottleneck. The model isn’t either. The bottleneck is you: your attention, your working memory, your ability to hold a problem in your head long enough for it to become simple.

The work used to demand that you produce solutions, which forced you to understand the problem before you could move forward. Now it asks you to evaluate solutions generated by something else, at a pace that doesn’t leave much room for understanding. That shift looks like a productivity gain, but it’s a cognitive tradeoff in disguise. And you’re on the losing end.

The brain wasn’t built for infinite throughput

The brain does not think in an infinite stream. It has a small, fragile workspace. Neuroscientists call it working memory: the space where you actively hold and manipulate information long enough to reason with it. It can juggle roughly four things at once. Not four files. Not four services. Four chunks of anything. It’s the reason you can’t multiply two large numbers in your head but can do it on paper. The scratchpad is tiny on purpose.

Software engineering has always involved ambiguity, interruption, and context switching. The difference is that the old workflow still forced compression. You had to turn messy reality into an internal model of your own. Even when the work was fragmented, all those fragments still had to come together inside your head: logs, docs, code, runtime behaviour, and half-formed hypotheses all had to be compressed into something you understood.

AI changes that rhythm. Now the switching is often between candidate outputs rather than inputs that deepen your mental model. You are moving across contexts and you are repeatedly interrupting model formation to evaluate externally generated possibilities. That is a different cognitive act. It rewards throughput before understanding.

This is what a typical AI-assisted workflow asks your working memory to hold at once: the original problem, your own half-formed approach, the model's proposed solution, and the gap between what it wrote and what you intended.

Each of these is competing for the same four slots.

When working memory overloads, your brain does what it always does under pressure: it takes shortcuts. Instead of reasoning through the problem, you start scanning for things that look right. The diff seems reasonable, the tests pass, good enough. Move on. AI coding feels like leverage. It is leverage. But it also creates a constant micro-choice: Do I slow down and understand this, or do I keep the throughput going?

Your brain learns the answer very quickly: keep moving.

The dopamine trap

I find AI genuinely addictive. But the addiction isn't what worries me most. The scary part is that AI makes it pleasant not to understand.

Every prompt-response cycle has the structure of a variable reward loop. You ask, you receive, you feel a small hit of progress. Sometimes the output is exactly what you needed. Sometimes it’s wrong. Sometimes it’s surprisingly good. The unpredictability keeps you engaged.

Neuroscientist Wolfram Schultz showed decades ago that dopamine fires most strongly in response to unexpected rewards [1] [2]. It’s what he called prediction error, the gap between what you expected and what you got. When you can’t predict the outcome, the loop becomes harder to resist. It’s the same principle behind slot machines, social media feeds, and TikTok’s scroll.

The prompt-response loop is a slot machine for your engineering brain:

Ask → receive → feel progress.

Even if the progress is cosmetic. Even if you couldn’t explain why the solution works, your brain registers it as forward motion. Something happened, it felt productive, so do it again. The gap between understanding and the feeling of understanding is easy to miss when the loop keeps moving.

And the more you pull the lever, the more your brain adapts to that pace, training you to expect constant micro-completions. Sitting with a hard problem with no reward signal and no sense of forward motion becomes progressively uncomfortable. You feel restless. You reach for the prompt. Not always because you need help, but because the silence itself starts to feel wrong.

Over time, your tolerance for the discomfort of not-knowing collapses. And not-knowing is where real engineering starts.

The friction was the feature

When the machine was the bottleneck, the friction served a purpose: it forced understanding. When compilation took five minutes, you spent those five minutes thinking about the problem. When there was no model to ask, you read the docs yourself, built a mental model, tested it against reality, and refined it. The slowness wasn’t a bug. It was training.

The brain is antifragile in the Nassim Taleb sense: it gets stronger from struggle and weaker from ease. Every time you stayed with a problem until it clicked, you made the next one easier to see. Pattern recognition, systems intuition, the ability to “feel” when something is off in a codebase, all of that came from repeated, effortful engagement with hard things.

AI removes the effort. And with it, the training. The friction that AI eliminates was doing two things at once: slowing you down and making you better. We kept the speed and lost the getting-better part without noticing, because the getting-better part was invisible.

It’s like the brakes on a car. Brakes do not make a fast car slower. They are what let it go safely fast. The same was true of a lot of the friction in software engineering. The waiting, the reading, the wrestling with the problem all felt slow, but they were building the control that made real speed possible.

Cognitive debt

There’s a concept every software engineer already understands: technical debt. You ship code you don’t fully understand because the deadline matters more than the architecture. It works today. It costs you tomorrow. And it compounds.

What’s happening with AI-assisted work is the cognitive equivalent.

Every time you accept model output without deeply understanding it, you take on cognitive debt. You ship understanding you don’t have. The code works, the PR gets merged, the feature ships. But you didn’t build the mental model. You don’t know why it works. You can’t predict how it fails.

Technical debt means your codebase becomes harder to change. Cognitive debt means you become harder to trust.

And like technical debt, cognitive debt compounds. The next time you hit a related problem, you do not have the foundation to judge the model’s output properly. So you accept it again. The debt grows, your ability to evaluate shrinks, and the cycle gets tighter.

Eventually you end up in a state every engineer recognizes from a neglected codebase: everything looks vaguely reasonable, but nobody can tell what is actually good anymore. The output sounds right, the code looks fine, and nothing seems obviously broken. But that is exactly what makes the situation dangerous. You cannot really verify any of it, because you never built the understanding that verification depends on.

In a codebase, this is when you stop refactoring and start rewriting. In your own mind, there is no equivalent of a rewrite. There is only the slow, uncomfortable work of rebuilding understanding from scratch. And that is exactly the kind of work this new loop trains you to avoid.

What this costs you

What AI erodes is the exact thing that makes engineers valuable.

In The Illusion of Building, I argued that writing code was never the hard part of software engineering. The hard part was understanding systems: why things break, how components interact under pressure, what assumptions are load-bearing, where the entropy hides. That understanding came from years of effortful engagement with real problems.

AI reduces the amount of time you spend sitting with the problem yourself. “I stayed with this until I understood it” turns into “I prompted until something worked.” The output may look the same. The learning does not. You are genuinely more productive with AI in the short term, while simultaneously becoming a worse engineer.

Betting against understanding

Everyone is holding their breath right now.

AI labs are marketing a future where code gets fully automated. Understanding will be optional. The cognitive tradeoff we’re all feeling? A short-term cost on the way to a world where none of this matters because the models will handle it.

But everyone who works with AI on hard enough problems knows how far we are from that world. The models are getting better rapidly, but they are still remarkably unreliable. The hallucinations are real. The confident wrongness is real. The moment you step outside well-trodden patterns, you’re back to needing exactly the kind of understanding that the tool is eroding.

So developers are stuck. Giving up depth because they’ve been told it won’t matter soon, while experiencing daily that it still very much does. The rationalization, “why invest in understanding if it’ll be automated in 18 months?”, makes the cognitive debt feel acceptable. But 18 months keeps moving. And the debt keeps compounding.

Don’t throw away your oars just because you think you can see the shore. Especially when the shore keeps moving.

What to actually do about this

I want to be honest here: there’s no clean answer.

You can’t just slow down. Your company doesn’t care about your cognitive health. It cares about velocity, and the person next to you is prompting through tickets at twice your pace. “I’m being deliberate about my learning” is not a thing you can say in a standup.

And “stop using AI” is Luddism. Some AI gains are real, especially in the obvious places: boilerplate, scaffolding, familiar syntax, patterns you have already solved a hundred times. That is not where your advantage is. Offloading that work is just leverage.

So the question isn’t whether to use AI. It’s how to use it without losing the thing that makes you good.

Here is the mental model I keep coming back to: understanding does not slow you down, it is what makes speed sustainable. The developer who understands the system uses AI better because they know what to ask, what to reject, and what correct looks like. For a while, someone without that understanding can look just as productive. But that only holds while the work stays close to familiar patterns. Once the problem gets subtle, the system behaves strangely, or the generated answer stops being enough, the gap opens up fast.

That is the part worth protecting. Code is not really yours just because it ships under your name. It is yours when you can explain it, change it, and debug it under pressure.

And that is why “just outsource more” is not a neutral decision. Some productivity losses are reversible. Some are not. If you skip writing tests for a week, you can recover. If you spend years training yourself out of patience, out of deep reading, out of staying with confusion long enough for understanding to form, that is much slower to rebuild. There is no quick reset for attention.

So I do not think the answer is to reject the tools. I think the answer is to be more careful about what you let them replace. Outsourcing repetition is one thing. Outsourcing understanding is another.

The pressure to move fast is real. The incentive to skim is real. The tools are built to keep that loop going. But the alternative is drifting into a way of working where you can no longer function without the tool and can no longer properly judge what it gives you. That’s just dependency disguised as speed.

Your attention is finite. Your working memory is small. Your brain adapts to whatever you repeatedly ask it to do. Treat them as constraints to respect, not limitations to optimize away.

Because the moment you stop being careful about what you outsource, it’s not your workflow that changes. It’s you.

Next up

The Illusion of Building

AI makes it dramatically cheaper to produce software that appears to work. But “building an app” and “engineering a system” are two different activities that people keep confusing, and the gap between them is where most of the actual work lives.

© 2026 Uphack.io ✦ Theme inspired by Aria
