Agents handle execution, the interesting work lives one level above.
This is a thought that’s been floating around in my head for a while, and since cybersecurity right now feels less like an industry in motion and more like a “fish market” (an Italian expression for a place full of chaos, noise, and confusion), I figured I’d put it down. Maybe someone else is walking the same line and will recognize themselves in it.
The starting point is almost too obvious to write: the technical work is being handed off to AI. Execution is where it shines. Planning, less so, and only within predefined standards, because in the end it was trained on data produced by someone else. Defining a strategy? Forget it; how could it?
So if the technical layer slips below the waterline, the human’s value shifts. It shifts up.
The military world is unambiguous about this, and it’s one of those frames I’ve carried with me and keep finding ridiculously useful for cutting through the noise of this moment. The framework was formalized in U.S. Army doctrine and has since become standard across most NATO doctrines.
Three levels:

- Strategic: decide what the objective is and why it’s worth pursuing.
- Operational: deploy and sequence resources to reach that objective.
- Tactical: execute the individual actions on the ground.
Three levels that are basically three different jobs with three different kinds of value. Most people already live inside this structure without naming it. Let’s be honest: strip the military framing and you have how human hierarchies have always organized effort; the military just made it explicit and enforced it from day one. Everywhere else it’s blurrier, but the shape is the same. Map yourself onto it and you immediately know what game you’re playing.
Two years ago, which feels like a geological era in this industry, the three-level model held for almost everyone. More complex organizations added more layers, but the basic shape was stable. That changed the moment the agentic model took hold. The shift happened when tools like Claude Code and Codex started turning prompts into running systems, agents into actual colleagues. That’s when the floor moved.
If every human being, with a pool of agents at their side, automatically gets bumped to the operational level, “capable of deploying resources to get to the objective” without necessarily being the best technician in the room, then the paradigm shifts for everyone, not just a few.
A caveat is needed here, otherwise I’d be selling an illusion. Having agents at your disposal does not make you operational, in the same way that handing a squad to a soldier does not make him an officer. Anyone who has read the doctrine cited two paragraphs above knows this perfectly well: those are three different jobs, and the military separates them precisely because executing, coordinating, and defining an objective require very different muscles.
What actually changes is that the operational layer stops being a structural privilege, reserved for those who manage human teams inside an organization, and becomes a level accessible to anyone capable of thinking in those terms. The barrier is no longer organizational, it’s cognitive. And this is where the new divide opens up: between those who can only execute and those who can coordinate toward an objective, the gap in value explodes. Before, that gap was hidden by the fact that almost no one had access to a personal pool of executors, so the question never came up. Now it does, and it does for everyone.
If everyone is, in principle, the leader of a team, the bottleneck moves. The most skilled executor stops being the rare commodity. What becomes scarce is the person with the strategic vision to reach the objective, and before that, the person who can define what the objective even is.
Before, the game was played with a small number of strong executors: technicians who could pull the trigger and hit the target with precision. Today, that capability simply doesn’t draw the line between good and great anymore. The line moves. What matters now is when to pull it, what consequences it triggers, how to read the situation and squeeze the maximum advantage out of it.
That last part is where I think most of the conversation about AI gets stuck, because we keep discussing whether AI can do X (write code, find vulnerabilities, build infrastructure…), and we miss that the interesting question lives one floor up.
Everything available today, especially in the “red team agentic magic world”, shares the same blind spot: it models tactical execution, not operational reasoning. It automates the how, not the whether, the when, or the at what cost. That distinction maps directly onto the operational layer, the one between the executor and the strategist.
This is where things get interesting, because it’s the layer where, in offensive security, the actually difficult questions live.
Take an offensive operation and look at the questions that genuinely govern it:

- Which action do you take first, and what does that ordering close off?
- Do you burn this capability now, or hold it for when it matters more?
- What does this move cost you in exposure versus what it buys you in access?
- If the defense reacts, which of your remaining options still stand?
These are not strategic questions, let’s be precise about it: the objective is given. They’re not tactical either: the single action, in isolation, is the easy part. They’re operational, in the most textbook sense of the word. They’re about sequencing, about trade-offs between resources you’ve spent and resources you still have, about reading how the next move reshapes everything that comes after it.
Nothing available at agentic scale today reasons across those questions. The current generation of tools chains known techniques, optimizes for the next step, and does it well, sometimes better than a junior operator. But campaign-level thinking, and the operational layer in general, is still entirely a human problem: every move reshapes what’s possible next, defenses can respond to any of them, and the whole thing has to hold together as a single coherent arc. And it is, by a wide margin, the most interesting problem on the board.
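To make that distinction concrete, here’s a deliberately toy sketch in Python. Everything in it is invented for illustration: the action names, the numbers, the “detection budget”, the rule that noisy actions make later ones riskier. It doesn’t model any real tool; it only shows the structural difference between optimizing the next step and scoring the whole sequence when each move reshapes the cost of the ones after it.

```python
# Toy model, not a real framework. Every action, number, and rule here
# is invented for illustration: the point is the structure, not the data.
from itertools import permutations

# (name, access gained, base detection risk) -- all hypothetical values
ACTIONS = [
    ("phish_foothold", 3, 2),
    ("lateral_move",   4, 3),
    ("dump_creds",     5, 4),
    ("exfil",          9, 7),
]
DETECTION_BUDGET = 10  # past this, assume the defense responds and the op ends


def effective_risk(base: int, accumulated: int) -> int:
    # Arbitrary toy rule: noisy past actions raise the cost of future ones,
    # one extra point per 3 points of accumulated risk. This is what makes
    # sequencing matter, not just selection.
    return base + accumulated // 3


def greedy(actions, budget):
    """Tactical loop: always grab the highest-gain action that still fits."""
    taken, gain, risk = [], 0, 0
    for name, g, r in sorted(actions, key=lambda a: -a[1]):
        r_eff = effective_risk(r, risk)
        if risk + r_eff > budget:
            continue  # skip what no longer fits, keep grabbing
        taken.append(name)
        gain += g
        risk += r_eff
    return taken, gain


def plan(actions, budget):
    """Operational loop: score whole sequences, because order changes cost."""
    best = ([], 0)
    for seq in permutations(actions):
        taken, gain, risk = [], 0, 0
        for name, g, r in seq:
            r_eff = effective_risk(r, risk)
            if risk + r_eff > budget:
                break  # every ordered subset shows up as some prefix
            taken.append(name)
            gain += g
            risk += r_eff
        if gain > best[1]:
            best = (taken, gain)
    return best


if __name__ == "__main__":
    print("greedy :", greedy(ACTIONS, DETECTION_BUDGET))
    print("planned:", plan(ACTIONS, DETECTION_BUDGET))
```

Run it and the greedy loop burns the whole budget on the single loudest move, while the planner sequences three quieter ones for more total access. Now replace the fixed “detection budget” with a defense that learns and adapts, and the brute-force search over sequences stops being tractable; that’s exactly the kind of reasoning that has no automated equivalent today.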
What’s actually shifting is the layer where humans add value. It moves up by one floor, and that’s not a small thing: the work now lives one level higher than it used to. The ones who learn to think operationally will find that the work has never been more interesting than it is right now.
Don’t fight to stay where you were. Climb.