Security is having a moment: new tools can read code, find bugs, and even suggest fixes. Some people took that to mean cybersecurity is basically over. This is my take from the inside: what’s actually changing, what isn’t, and why security was never just a code problem.
There’s a meme that Charity Majors helped make famous: “I don’t always test my code, but when I do, I test in production.”
It started as a joke, but it became one of the most important arguments in modern software engineering. Her point wasn’t that pre-production testing is worthless. It was that staging environments are a pale imitation of reality, and that once you deploy, you’re not testing code anymore. You’re testing a system:
“Every deploy, after all, is a unique and never-to-be-replicated combination of artifact, environment, infra, and time of day. By the time you’ve tested, it has changed. Once you deploy, you aren’t testing code anymore, you’re testing systems — complex systems made up of users, code, environment, infrastructure, and a point in time. These systems have unpredictable interactions, lack any sane ordering, and develop emergent properties which perpetually and eternally defy your ability to deterministically test.”
— Charity Majors, “I test in prod”
She was talking about software reliability. But she might as well have been describing security. I kept thinking about this framing recently, especially as new security tools show up. It’s a useful mental model for how I think about security and where it’s heading.
Last week, Anthropic announced Claude Code Security, an AI-powered tool that scans codebases for vulnerabilities and suggests patches.
The reaction was swift and dramatic: the internet decided that Claude Code Security is the end of cybersecurity. Cybersecurity stocks sold off hard the same day. CrowdStrike fell nearly 8%. Okta dropped over 9%. Cloudflare, Zscaler, and Palo Alto Networks all took hits. LinkedIn filled with hot takes. The consensus among a certain crowd: pentesting is dead, AppSec is automated, the industry is over.
But the reaction gets one thing badly wrong: it confuses code scanning with security.
The map is not the territory
There’s a mental model from philosophy that Farnam Street popularized in the context of clear thinking: the map is not the territory. The idea, originally from Alfred Korzybski, is that any representation of a thing is fundamentally different from the thing itself. A map of a city tells you about streets and distances. It doesn’t tell you that a road is under construction today, that a neighborhood feels different at night, or that a shortcut floods in the rain. Reality is always richer, messier, and more surprising than any description of it.
Source code is a map.
It is a very detailed map, sometimes beautifully drawn. It has types and functions and tests and comments and commit history. But it still isn’t the thing your customers use. Your customers use the running system: the service mesh, the identity provider, the caches, the queues, the feature flags, the WAF rules someone added during an incident and never removed, the “temporary” fallback that became permanent, the dependency that gets replaced by a sidecar at runtime, the load balancer that rewrites headers in a way nobody remembers.
That system is the territory.
If you have ever been on call for a distributed system, you already know this in your bones. The bug was never “in the code.” The bug was in the interaction between three correct components under a specific kind of load on a Tuesday.
Security vulnerabilities at scale often look exactly like that.
What security looks like when it’s real
A lot of vulnerability scanning, historically, has trained people to treat security like a static property of code. Find bad pattern, fix bad pattern, ship good code.
That is useful. It is not the whole job, though.
I’ve spent over a decade in this industry. For a significant portion of that time, my job has involved testing the security of large, complex systems. When I go looking for serious issues, I rarely begin with a repository. I begin with questions that repositories cannot answer:
- What does “authenticated” mean here, end to end? Not “there’s a middleware.” I mean: which identity is asserted, where, and how many times it is reinterpreted on the way to a database row.
- Where does trust enter the system? Which headers are treated as truth? Which service is allowed to mint claims? Which network boundary is assumed to exist?
- What happens when the system is stressed? When retries pile up, when timeouts cascade, when queues back up, when one region becomes “read-only” in a way nobody designed for.
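The “where does trust enter” question can be made concrete. Here is a minimal sketch of one common seam (the header name, function, and gateway behavior are all hypothetical, invented for illustration): a service that treats an internal identity header as truth, which is only safe as long as the network boundary that strips that header from external traffic actually exists.

```python
# Hypothetical sketch: a service that trusts an internal identity header.
# The header name (X-Internal-User) and handler are illustrative only.

def handle_request(headers: dict) -> str:
    # The code is "correct": it refuses requests without an identity.
    # But the real assumption lives outside the code: some gateway is
    # supposed to strip X-Internal-User from external traffic. If a
    # networking change breaks that assumption, any caller can mint
    # an identity, and no static analysis of this file will notice.
    user = headers.get("X-Internal-User")
    if user is None:
        return "401 Unauthorized"
    return f"200 OK as {user}"

# The map says "authenticated." The territory decides who can set the header.
print(handle_request({"X-Internal-User": "admin"}))
```

The repository answers “is there an auth check?” It cannot answer “who is able to set this header today?” — that answer lives in the network, not the code.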
Those questions are not answered by static analysis because the answers are often not written down in any single place. They live in the seams: between components, between teams, between “how it should work” and “how it works right now.”
Some examples of “territory bugs” that code can be perfectly proud of while the system stays unsafe:
- A service that correctly validates JWTs, but accepts tokens from an issuer that is only meant for a different audience because an internal gateway normalizes claims in a surprising way.
- A permission check that is correct in one service, bypassed in another because someone added an internal endpoint for a backfill job and it quietly became part of the product surface.
- A race condition that only appears when two requests hit two replicas behind a load balancer and a cache eviction lands between them in just the wrong order.
- A data exposure that is “working as designed” because the design assumed the caller was always another internal service, and then a networking change made “internal” reachable in a new way.
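The first bullet, issuer/audience confusion, is a good example of a bug that is invisible in any single repository. A minimal simulation (claims are plain dicts standing in for a verified JWT; the service and audience names are invented):

```python
# Hypothetical sketch of issuer/audience confusion. In reality the claims
# would come from a cryptographically verified JWT; names are invented.

def validate(claims: dict, expected_issuer: str) -> bool:
    # This service checks the issuer -- and the check passes.
    # It never checks 'aud', because "the gateway handles that."
    return claims.get("iss") == expected_issuer

# A token minted for the billing API...
token_claims = {"iss": "https://auth.internal", "aud": "billing-api", "sub": "user42"}

# ...is happily accepted by a different service, because an internal
# gateway forwards claims and each side assumes the other enforces 'aud'.
print(validate(token_claims, "https://auth.internal"))  # True: valid signature, wrong audience
```

Each service, read alone, looks reasonable. The vulnerability is the sum of two assumptions held by two teams.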
These are not exotic. They are the normal failure modes of large systems built by humans under time pressure. They are what happens when software becomes a supply chain of assumptions.
This is why the best security engineers I know spend as much time studying architecture, traffic flows, org incentives, and operational reality as they do reading code.
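Even the race-condition bullet above can be shown in a few lines. This is a deterministic, hand-interleaved simulation (no real threads, invented names) of the classic check-then-act pattern: two requests hit two replicas, both pass the check, both act.

```python
# Deterministic simulation of a check-then-act race. We interleave two
# request handlers by hand, the way two replicas might under load.
# The "credits" resource and handler names are invented for illustration.

balance = {"credits": 1}

def check() -> bool:
    # Each replica's code is individually correct: check before spending.
    return balance["credits"] > 0

def act() -> None:
    balance["credits"] -= 1

# Request A checks, then request B checks -- both see 1 credit...
a_ok = check()
b_ok = check()
# ...then both act. One credit gets spent twice.
if a_ok: act()
if b_ok: act()
print(balance["credits"])  # -1
```

No line of this code is “the bug.” The bug is the ordering, and orderings only exist in running systems.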
What Claude Code Security changes (and why it matters)
Let’s take the best-case scenario: Claude Code Security turns out to be outstanding at identifying issues in code. Even in this optimistic scenario, it is a real leap in one specific category: code understanding applied to vulnerability discovery and remediation. If it can reliably reason across a codebase, trace data flows, and propose patches that survive human review, that’s a meaningful improvement over a lot of the “lint-with-anxiety” tooling we’ve endured for years.
That matters for two reasons:
- It lowers the floor. Teams that never had the budget or expertise for deep review get something far better than nothing.
- It shifts attention. If we can spend less human time on the obvious footguns, we can spend more time on the weird, systemic failures that only show up in complex deployments.
So yes, this could be disruptive, especially to vendors whose core value proposition is “we scan your code for known badness.” If your product is a better grep, reasoning models are not good news.
But it does not eliminate the need for security any more than better compilers eliminated the need for performance engineering.
It changes where the work is.
Why “cybersecurity is over” is a comforting illusion
The market reaction made for good screenshots. Several well-known security stocks dropped hard on the day the story hit the mainstream financial press, with commentary framing Claude Code Security as a potential disintermediator.
That reaction is understandable if you collapse “security” into a single bucket called “finding vulnerabilities.” But modern security is a bundle of different jobs operating at different layers:
- Preventing whole classes of bugs from shipping (secure-by-default libraries, paved roads, guardrails).
- Detecting abuse in real time (identity anomalies, endpoint telemetry, network behavior).
- Responding to incidents (containment, eradication, recovery, forensics).
- Designing systems that fail safely (authorization models, isolation, blast radius).
- Managing the human reality (access reviews, change management, incentives, training).
Claude Code Security lives primarily in one lane: pre-deploy code review. It reads the map.
A lot of the industry sells tools for the territory: identity, endpoints, networks, runtime detection, response. Those are not replaced by better static reasoning, because attackers are not static either.
If anything, there’s a darker possibility: as AI makes it cheaper to find and exploit code-level flaws, pressure increases on runtime defenses and resilient system design. The territory gets noisier. The roads flood more often. You want better monitoring, better containment, and better recovery.
So the take “security is over” is not just wrong. It’s backwards.
The philosophical point: security is an argument about reality
There’s a version of security that people want to believe in. It goes like this:
- The code is the truth.
- The truth can be analyzed.
- The analysis can be automated.
- Therefore, the problem can be solved.
It’s a deeply modern fantasy: that enough intelligence applied to the artifact will tame the system.
But security has always been less like mathematics and more like ecology.
You do not secure a forest by proving trees correct. You secure it by understanding incentives, climate, predators, invasive species, and what happens when something catches fire.
A reasoning model that reads code is impressive. It is also, unavoidably, an abstraction-layer tool.
And the hard parts of security happen at the abstraction boundaries:
- where identity becomes authorization,
- where “internal” becomes reachable,
- where retries become amplification,
- where “temporary” becomes permanent,
- where humans adapt to guardrails.
Security is not a code problem because insecurity is rarely just “a bug.” It’s usually a relationship: between services, between assumptions, between teams, between what the system says and what the system does.
What to do with this, if you build software
If you’re leading or building, here’s the non-doomer take:
- Use these tools aggressively for what they’re good at. Let them chew through your codebase. Let them propose patches. Treat them like a powerful assistant who never gets bored.
- Invest in the territory. Instrumentation, logging, identity rigor, authorization consistency, blast radius reduction, sane defaults, safe deployment patterns.
- Push security “left” and “right.” Left is code review. Right is runtime reality. The mature posture is both.
- Assume attackers get the same tools. If defenders can reason across your codebase, attackers can too. The differentiator becomes response speed and systemic resilience, not just “did we miss a bug.”
Claude Code Security is not the end of cybersecurity.
It’s a reminder that we’ve spent a long time arguing about the map because it was the only thing we could cheaply examine. Now the map is easier to read. Good.
Now we have fewer excuses to ignore the territory.