
Security Is Not a Code Problem

Written by Marius Horatau
Published on February 22, 2026

Security is having a moment: new tools can read code, find bugs, and even suggest fixes. Some people took that to mean cybersecurity is basically over. This is my take from the inside: what’s actually changing, what isn’t, and why security was never just a code problem.


There’s a meme that Charity Majors helped make famous: “I don’t always test my code, but when I do, I test in production.”

It started as a joke, but it became one of the most important arguments in modern software engineering. Her point wasn’t that pre-production testing is worthless. It was that staging environments are a pale imitation of reality, and that once you deploy, you’re not testing code anymore. You’re testing a system:

“Every deploy, after all, is a unique and never-to-be-replicated combination of artifact, environment, infra, and time of day. By the time you’ve tested, it has changed. Once you deploy, you aren’t testing code anymore, you’re testing systems — complex systems made up of users, code, environment, infrastructure, and a point in time. These systems have unpredictable interactions, lack any sane ordering, and develop emergent properties which perpetually and eternally defy your ability to deterministically test.”

— Charity Majors, “I test in prod”

She was talking about software reliability, but she might as well have been describing security. I’ve kept coming back to this framing as new security tools show up: it’s a useful mental model for how I think about security and where it’s heading.

Last week, Anthropic announced Claude Code Security, an AI-powered tool that scans codebases for vulnerabilities and suggests patches.

The reaction was swift and dramatic: the internet decided that Claude Code Security is the end of cybersecurity. Cybersecurity stocks sold off hard the same day: CrowdStrike fell nearly 8%, Okta dropped over 9%, and Cloudflare, Zscaler, and Palo Alto Networks all took hits. LinkedIn filled with hot takes. The consensus among a certain crowd: pentesting is dead, AppSec is automated, the industry is over.

But here’s what the reaction gets wrong: it confuses code scanning with security.

The map is not the territory

There’s a mental model from philosophy that Farnam Street popularized in the context of clear thinking: the map is not the territory. The idea, originally from Alfred Korzybski, is that any representation of a thing is fundamentally different from the thing itself. A map of a city tells you about streets and distances. It doesn’t tell you that a road is under construction today, that a neighborhood feels different at night, or that a shortcut floods in the rain. Reality is always richer, messier, and more surprising than any description of it.

Source code is a map.

It is a very detailed map, sometimes beautifully drawn. It has types and functions and tests and comments and commit history. But it still isn’t the thing your customers use. Your customers use the running system: the service mesh, the identity provider, the caches, the queues, the feature flags, the WAF rules someone added during an incident and never removed, the “temporary” fallback that became permanent, the dependency that gets replaced by a sidecar at runtime, the load balancer that rewrites headers in a way nobody remembers.

That system is the territory.

If you have ever been on call for a distributed system, you already know this in your bones. The bug was never “in the code.” The bug was in the interaction between three correct components under a specific kind of load on a Tuesday.

Security vulnerabilities at scale often look exactly like that.

What security looks like when it’s real

A lot of vulnerability scanning, historically, has trained people to treat security like a static property of code. Find bad pattern, fix bad pattern, ship good code.

That is useful. But it is not the whole job.
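The kind of finding that pattern-matching is genuinely good at lives entirely in the source. A hedged sketch in Python (the table and function names are invented for illustration, not any tool’s real output):

```python
# A "map" bug: the flaw is fully visible in the code itself, which is
# exactly what makes it scannable. Hypothetical example.
import sqlite3

def find_user_unsafe(conn, name: str):
    # Flaggable pattern: user input concatenated into SQL (injection risk).
    query = "SELECT id FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, name: str):
    # The mechanical fix a scanner can suggest: a parameterized query.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()
```

Finding and fixing this pattern is real value. But notice what the fix cannot tell you: whether this query should have been reachable by that caller in the first place.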

I’ve spent over a decade in this industry. For a significant portion of that time, my job has involved testing the security of large, complex systems. When I go looking for serious issues, I rarely begin with a repository. I begin with questions that repositories cannot answer: Who can actually reach this service? What does it implicitly trust? What happens when a dependency fails or a control is bypassed? Which assumptions only hold because of a config someone set years ago?

Those questions are not answered by static analysis because the answers are often not written down in any single place. They live in the seams: between components, between teams, between “how it should work” and “how it works right now.”

Some examples of “territory bugs” that code can be perfectly proud of while the system stays unsafe: an internal-only endpoint that becomes reachable after a load balancer change, an authorization check that two services each assume the other performs, a WAF rule that was supposed to strip a trusted header but quietly stopped doing so, a credential that outlives the incident it was minted for.

These are not exotic. They are the normal failure modes of large systems built by humans under time pressure. They are what happens when software becomes a supply chain of assumptions.
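To make “territory bug” concrete, here’s a hedged sketch (the header name and proxy behavior are invented for illustration): code that passes review cleanly, but is only safe as long as an edge proxy outside the repository strips a trusted header from external traffic.

```python
# Hypothetical "territory bug". Every line below can pass code review, yet
# safety depends on something outside the repository: an edge proxy that is
# assumed to strip X-Internal-Auth from external traffic.

def is_internal(headers: dict) -> bool:
    # Looks correct in isolation: only internal callers carry this header.
    return headers.get("X-Internal-Auth") == "trusted"

def handle_request(headers: dict) -> str:
    # Route privileged traffic based on the trusted header.
    if is_internal(headers):
        return "admin-panel"   # privileged path
    return "public-page"

# In the map (the code), this is fine. In the territory, it is only safe
# while the proxy strips the header from outside requests. If that rule is
# dropped during an incident and never restored, any caller can forge it:
forged = {"X-Internal-Auth": "trusted"}
assert handle_request(forged) == "admin-panel"  # nothing in the code changed
```

No static analysis of this file can tell you whether the proxy rule still exists. That answer lives in the running system.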

This is why the best security engineers I know spend as much time studying architecture, traffic flows, org incentives, and operational reality as they do reading code.

What Claude Code Security changes (and why it matters)

Let’s take the best-case scenario: Claude Code Security turns out to be outstanding at identifying issues in code. Even in this optimistic scenario, it is a real leap in one specific category: code understanding applied to vulnerability discovery and remediation. If it can reliably reason across a codebase, trace data flows, and propose patches that survive human review, that’s a meaningful improvement over a lot of the “lint-with-anxiety” tooling we’ve endured for years.

That matters for two reasons:

  1. It lowers the floor. Teams that never had the budget or expertise for deep review get something far better than nothing.
  2. It shifts attention. If we can spend less human time on the obvious footguns, we can spend more time on the weird, systemic failures that only show up in complex deployments.

So yes, this could be disruptive, especially to vendors whose core value proposition is “we scan your code for known badness.” If your product is a better grep, reasoning models are not good news.

But it does not eliminate the need for security any more than better compilers eliminated the need for performance engineering.

It changes where the work is.

Why “cybersecurity is over” is a comforting illusion

The market reaction made for good screenshots. Several well-known security stocks dropped hard on the day the story hit the mainstream financial press, with commentary framing Claude Code Security as a potential disintermediator.

That reaction is understandable if you collapse “security” into a single bucket called “finding vulnerabilities.” But modern security is a bundle of different jobs operating at different layers: finding flaws in code, managing identity and access, defending endpoints and networks, detecting attacks at runtime, and responding when something gets through.

Claude Code Security lives primarily in one lane: pre-deploy code review. It reads the map.

A lot of the industry sells tools for the territory: identity, endpoints, networks, runtime detection, response. Those are not replaced by better static reasoning, because attackers are not static either.

If anything, there’s a darker possibility: as AI makes it cheaper to find and exploit code-level flaws, pressure increases on runtime defenses and resilient system design. The territory gets noisier. The roads flood more often. You want better monitoring, better containment, and better recovery.
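A minimal sketch of what a runtime (“territory”) control can look like, assuming an invented internal-only header and an invented internal network range: instead of trusting review alone, watch live traffic for an invariant violation.

```python
# Runtime invariant check: flag internal trust being claimed from a
# non-internal source address. Header name and network range are
# assumptions for illustration.
import ipaddress

INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")

def violates_invariant(src_ip: str, headers: dict) -> bool:
    """Flag requests that claim internal trust from outside the internal net."""
    claims_internal = "X-Internal-Auth" in headers
    from_internal = ipaddress.ip_address(src_ip) in INTERNAL_NET
    return claims_internal and not from_internal

# An external caller forging the header trips the alarm, even if the
# application code itself would have accepted the request:
print(violates_invariant("203.0.113.7", {"X-Internal-Auth": "trusted"}))  # True
print(violates_invariant("10.1.2.3", {"X-Internal-Auth": "trusted"}))     # False
```

The point is not this particular check; it’s that the check runs against traffic, not source code, so it keeps working even when the code was “correct” all along.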

So the take “security is over” is not just wrong. It’s backwards.

The philosophical point: security is an argument about reality

There’s a version of security that people want to believe in. It goes like this: if the code is correct, the system is secure, and if the system is secure, the problem is solved.

It’s a deeply modern fantasy: that enough intelligence applied to the artifact will tame the system.

But security has always been less like mathematics and more like ecology.

You do not secure a forest by proving trees correct. You secure it by understanding incentives, climate, predators, invasive species, and what happens when something catches fire.

A reasoning model that reads code is impressive. It is also, unavoidably, an abstraction-layer tool.

And the hard parts of security happen at the abstraction boundaries.

Security is not a code problem because insecurity is rarely just “a bug.” It’s usually a relationship: between services, between assumptions, between teams, between what the system says and what the system does.

What to do with this, if you build software

If you’re leading or building, here’s the non-doomer take:

  1. Use these tools aggressively for what they’re good at. Let them chew through your codebase. Let them propose patches. Treat them like a powerful assistant who never gets bored.
  2. Invest in the territory. Instrumentation, logging, identity rigor, authorization consistency, blast radius reduction, sane defaults, safe deployment patterns.
  3. Push security “left” and “right.” Left is code review. Right is runtime reality. The mature posture is both.
  4. Assume attackers get the same tools. If defenders can reason across your codebase, attackers can too. The differentiator becomes response speed and systemic resilience, not just “did we miss a bug.”

Claude Code Security is not the end of cybersecurity.

It’s a reminder that we’ve spent a long time arguing about the map because it was the only thing we could cheaply examine. Now the map is easier to read. Good.

Now we have fewer excuses to ignore the territory.

© 2026 Uphack.io