Every security operations playbook ever written rests on a single unstated assumption: that attackers and defenders move at roughly the same human speed. In the past year that assumption quietly broke, and most defenders are still operating like it didn't.

The arithmetic is unforgiving, even setting aside the existential question that Project Glasswing made unignorable in April, when Anthropic restricted the release of Claude Mythos Preview after concluding the model could chain zero-days with limited human direction. The Cloud Security Alliance is already briefing CISOs on a post-Mythos exploit environment. Whether or not that capability is in adversary hands today, the defender's clock is set by frontier release cadence, not by analyst hiring cadence.

Strike48's new State of Agentic Security report, based on a survey of 100 enterprise CISOs and mid-market security leaders, documents the cost of that asymmetry. SANS and the Cloud Security Alliance now consider it an emergency, warning that defensive teams that have not adopted AI agents face "a widening capability gap against AI-augmented adversaries, regardless of their existing technical skill." 84% of security leaders agree AI agents should be doing Tier 1 work. Only 22% are willing to fully automate it. Only 36% have any agent running in production at all. The gap between those numbers is where the math gets brutal.

Machine speed, human pipeline

The defender's job has always been to see fast and decide fast. That stopped being a fair contest the moment the offense automated both. Time-to-breakout, the window between an initial foothold and meaningful lateral movement, is now measured in minutes rather than the hours and days that defined SOC playbooks for two decades. The threat actor's reconnaissance, planning, and payload customization now compress into a single API call. Texas Mutual CISO John Sapp told The Security Digest earlier this year that "the first place a human-driven SOC breaks down is speed."

The defender, meanwhile, is still queuing alerts. The L1 analyst clicks into a SIEM, opens a ticket, waits for context to load, requests enrichment, and escalates to L2 in a Slack thread. Every step in that pipeline assumes a 24-hour incident response window. That window no longer exists.
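The pipeline arithmetic can be made concrete. A minimal sketch, using purely hypothetical per-step latencies (none of these numbers come from the survey), shows how a serial human pipeline overruns a minutes-scale breakout window:

```python
# Illustrative numbers only, not survey data: rough per-step latencies,
# in minutes, for the manual triage pipeline described above.
manual_pipeline_min = {
    "alert sits in queue": 30,
    "analyst opens SIEM and ticket": 5,
    "context loads, enrichment requested": 15,
    "escalation to L2 over chat": 45,
    "L2 picks up and investigates": 60,
}

# Hypothetical machine-speed time-to-breakout, in minutes.
breakout_window_min = 30

total = sum(manual_pipeline_min.values())
print(f"human pipeline: {total} min, breakout window: {breakout_window_min} min")
# The attacker completes lateral movement before the alert leaves the queue.
```

Even halving every step leaves the pipeline a multiple of the breakout window; the structure, not any individual step, is the problem.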

Why hiring doesn't close the gap

The instinct is to throw more humans at the problem. The math doesn't work, and CISOs already know it. Even ignoring the impossible recruiting market for senior detection engineers, the constraint sits one layer deeper than headcount. Human cognitive throughput has a hard ceiling: a single analyst can only meaningfully read so many alerts in a shift, and that ceiling holds regardless of how many seats are filled.

When asked which SOC task they'd hand to AI agents tomorrow, 60% of respondents named alert triage and prioritization. The vote reads as an admission. Human triage is the rate-limiting step on the entire defensive operation, and the leaders running those operations have stopped pretending otherwise.

The agents-on-agents architecture

If adversaries operate at machine speed, the only structurally coherent defense is to operate at machine speed too. Not augmentation. Not "AI-assisted" workflows. Actual agents, with actual scope of action, running against actual data.

The market has converged on the architecture. Strike48 and other agentic security platforms describe the same primitives: micro-agents with narrow, defined scopes of action; full audit trails capturing what each agent did, why, and on what data; human-in-the-loop controls adjustable per workflow so the security team decides where the agent moves alone and where a human reviews first.

That last piece is the one that gets the policy room. Agentic defense keeps humans in the loop where the loop matters: the irreversible actions, the unfamiliar threat patterns, the edge cases the agents weren't trained against. Agents own the throughput problem; humans own the judgment problem.
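The three primitives, narrow scope, full audit trail, and per-workflow human-in-the-loop controls, compose naturally in code. A minimal sketch follows; the class names, action names, and policy shape are invented for illustration and are not Strike48's API or any real platform's:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class AuditEntry:
    # Full audit trail: what the agent did, why, and on what data.
    agent: str
    action: str
    target: str
    rationale: str
    outcome: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class MicroAgent:
    name: str
    allowed_actions: set[str]    # narrow, defined scope of action
    requires_approval: set[str]  # per-workflow human-in-the-loop gate
    audit_log: list[AuditEntry] = field(default_factory=list)

    def act(self, action: str, target: str, rationale: str,
            approve: Callable[[str, str], bool] = lambda a, t: False) -> str:
        if action not in self.allowed_actions:
            outcome = "denied: out of scope"
        elif action in self.requires_approval and not approve(action, target):
            outcome = "held: awaiting human approval"
        else:
            outcome = "executed"
        # Every decision path is recorded, including denials and holds.
        self.audit_log.append(
            AuditEntry(self.name, action, target, rationale, outcome)
        )
        return outcome

# Hypothetical triage agent: the team decides which actions the agent
# takes alone and which a human reviews first.
triage = MicroAgent(
    name="triage-1",
    allowed_actions={"enrich_alert", "close_false_positive", "isolate_host"},
    requires_approval={"isolate_host"},  # irreversible, so a human gates it
)
triage.act("enrich_alert", "alert-4411", "correlate with threat intel")
triage.act("isolate_host", "web-07", "beaconing to suspected C2")
triage.act("delete_logs", "web-07", "cleanup")  # outside scope, denied
```

The design choice worth noting is that the approval gate is data, not code: moving an action between autonomous and human-reviewed is a configuration change per workflow, which is exactly the adjustability the surveyed platforms describe.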

Showing up to the fight blind

Even the 36% who have agents in production run into a structural problem the survey makes uncomfortably plain. 84% of respondents say their current tools cannot access all of their log data for investigations. 80% cite hot-storage costs as painful or a major budget concern. 65% have had at least one investigation stall in the last twelve months because the data they needed lived in a system their tools couldn't reach.

An agent fighting at machine speed against a faster opponent, on partial data, is doubly disadvantaged. The hallucination concern that 69% of leaders cite has more to do with the data layer than the model layer. An agent asked to draw conclusions from a deliberately incomplete picture of the environment will extrapolate, and in security operations the line between extrapolation and hallucination disappears after the fact. CardinalOps' latest State of SIEM Detection Risk report adds a parallel disadvantage on the rules side: the average enterprise SIEM detects only about 21% of MITRE ATT&CK techniques even on the data it has ingested. Showing up to an agentic fight requires solving both: what the defender's agent can see, and what the SIEM is configured to recognize when it sees it.
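The two shortfalls compound. A back-of-the-envelope sketch, using the 21% ATT&CK detection figure cited above and a purely hypothetical 60% data-visibility rate, and assuming (simplistically) that the two are independent:

```python
# Hypothetical: share of relevant log data the agent's tools can reach.
data_visibility = 0.60
# From the CardinalOps figure cited above: share of ATT&CK techniques
# the average enterprise SIEM is configured to detect.
detection_coverage = 0.21

# A technique is caught only if its telemetry is both reachable and
# matched by a detection rule; treating the two as independent:
effective = data_visibility * detection_coverage
print(f"effective coverage: {effective:.0%}")  # prints "effective coverage: 13%"
```

However rough the independence assumption, the multiplicative structure is the point: fixing either layer alone leaves effective coverage capped by the other.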

The slow side keeps losing

The trust concerns are real. 52% of leaders cite distrust of agent outputs as the top barrier to deployment. 41% say the technology isn't mature enough. 29% don't believe their data infrastructure can support effective agents. None of these concerns is wrong, and none of them stops the asymmetry from compounding.

The 36% who have agents in production are making a specific wager: that the cost of waiting is higher than the cost of imperfect deployment. The other 64% are running the inverse experiment: whether the existing analyst-driven SOC can hold the line against AI-augmented adversaries while the trust gap closes. They will not get to publish the results. The second-order story is what the 36% are doing to their org charts in the process.

The full report is available at Strike48: State of Agentic Security: Breaking Through the Trust Barrier.