
The AI SOC Hype Cycle

Ira Goldstein

CEO

May 5, 2026

The positioning around AI in the security operations center has shifted from "when" to "now." While components of AI in the SOC have become much more common over the past year, widespread usage and wholesale platform shifts are extremely rare. Tier 1 agents can triage alerts, investigate incidents and draft response actions, all without a human in the loop. Vendors are racing to announce agentic SOC capabilities, and CISOs are under pressure to adopt them.

But here's what most of that conversation is missing: AI agents don't fix a broken SOC. They inherit it.

If your detections are noisy, your telemetry is siloed and your response workflows are ad hoc, an AI agent will execute all of that, faster and at scale. The promise of the AI-enabled SOC collapses under the weight of poor fundamentals, despite platform vendor claims.

Seeing Through the AI SOC Noise

Earlier this year, I outlined five strategic investments CISOs need to make for 2026: attack-informed defenses, Detection-as-Code, unified telemetry, response playbooks and adversary simulation. To recap briefly:

  1. Attack-informed defenses — embedding continuous offensive insights, including recurring purple team exercises, directly into daily operations so every simulated attack strengthens real defenses.
  2. Detection-as-Code (DaC) — treating detection logic like software development: version-controlled, tested, auditable and deployable consistently across environments.
  3. Unified telemetry and full-fidelity data lakes — consolidating data across tools and environments to eliminate blind spots and enable the correlation that siloed systems can't provide.
  4. Response playbooks — structured automation that enforces consistent, repeatable response workflows and reduces adversary dwell time.
  5. Dedicated adversary simulation — frequent, agile attack simulations that keep defenses calibrated to real threats.

The AI-enabled SOC narrative reached early prominence at RSAC 2025. What's changed a year later is the noise level: the vendor proliferation, the procurement pressure, the expectation that every SOC should have an AI strategy and have it now.

That pressure makes the fundamentals more important, not less. Gartner predicts that 70% of large SOCs will pilot AI agents to augment operations by 2028 — but only 15% will achieve measurable improvements without structured evaluations. Many organizations investing in agentic SOC capabilities won't see meaningful results. Not because the technology fails them. Because the foundation beneath it does.

Those five investments aren't just good hygiene for the modern SOC. They are the prerequisites for a successful AI SOC.

Garbage In, Garbage Out, at Agent Speed

Agentic AI systems are only as effective as the inputs they operate on. Consider Detection-as-Code. When detection logic is version-controlled, tested and validated, an AI agent can be trusted to reason against it. When it isn't, when the business logic behind detections is undocumented and rules are inconsistently tuned to the point of producing false positives, an agent will dutifully chase every bad signal, creating more noise, not less.
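The Detection-as-Code idea can be sketched in a few lines of Python. The rule schema, field names and test-case shapes below are invented for illustration, not any particular SIEM's or rule language's format; the point is that the rule and its expected hits and misses live together in version control and are validated before deployment.

```python
# Hypothetical Detection-as-Code sketch: a detection rule expressed as data,
# plus a checked-in test suite that must pass before the rule is deployed.

def matches(rule, event):
    """Return True only if every condition in the rule matches the event."""
    return all(event.get(field) == value
               for field, value in rule["conditions"].items())

# Version-controlled rule definition (would live in its own file in a repo)
BRUTE_FORCE_RULE = {
    "id": "DET-0001",
    "title": "Privileged account brute force",
    "conditions": {"event_type": "auth_failure", "account_privileged": True},
}

# Test cases checked in alongside the rule: expected hits and expected misses
TEST_CASES = [
    ({"event_type": "auth_failure", "account_privileged": True}, True),
    ({"event_type": "auth_failure", "account_privileged": False}, False),
    ({"event_type": "auth_success", "account_privileged": True}, False),
]

def run_rule_tests():
    """Return the list of failing cases; an empty list means safe to deploy."""
    return [(event, want) for event, want in TEST_CASES
            if matches(BRUTE_FORCE_RULE, event) != want]
```

In a real pipeline this check runs in CI on every rule change, which is exactly what gives an agent a trustworthy signal to reason against.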

The same applies to telemetry. Unified, full-fidelity data lakes give AI agents the context they need to distinguish a genuine intrusion from a misconfigured endpoint. Fragmented, siloed data leaves agents operating blind in exactly the areas that matter most — lateral movement, privilege escalation, cross-environment threats.
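A toy example makes the telemetry point concrete. The event shapes and field names here are hypothetical, but the mechanic is the one siloed tools can't perform: joining two sources on a shared entity so a cross-environment pattern becomes visible that neither source shows alone.

```python
# Sketch of cross-source correlation over unified telemetry. Field names
# ("user", "action") and event contents are illustrative only.
from collections import defaultdict

def correlate_by_user(identity_events, cloud_events):
    """Group events from both sources by user; return users seen in both."""
    timeline = defaultdict(list)
    for e in identity_events:
        timeline[e["user"]].append(("identity", e["action"]))
    for e in cloud_events:
        timeline[e["user"]].append(("cloud", e["action"]))
    # Users active in both planes are the candidates a siloed tool misses
    return {user: acts for user, acts in timeline.items()
            if {src for src, _ in acts} == {"identity", "cloud"}}

identity_events = [{"user": "svc-admin", "action": "privilege_grant"}]
cloud_events = [{"user": "svc-admin", "action": "snapshot_delete"},
                {"user": "dev-user", "action": "config_change"}]

# svc-admin appears in both sources and surfaces; dev-user does not
hits = correlate_by_user(identity_events, cloud_events)
```

With the two sources in separate tools, each side of the `svc-admin` sequence looks routine on its own; the correlation only exists where the data sits together.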

And without adversary simulation baked into operations, agents have no way to surface their own detection gaps. A model trained on historical alerts will miss novel techniques. Consistent red team exercises and purple teaming, with a combination of automation and human ingenuity, ensure that what your agents are looking for actually reflects what adversaries are doing today.

What Works in the Agentic SOC: Three Real Examples

At UltraViolet Cyber, we've deployed agentic runbooks across our SOC operations, and the results speak to what's possible when the foundation is right.

Take identity abuse investigations, one of the highest-volume, most time-sensitive alert categories any SOC handles. Privileged account brute force attempts and suspicious user activity reports used to require 45 minutes of analyst investigation time per alert. With the runbook deployed, that same investigation completes in 15 minutes — a 67% reduction. The AI handles the cognitive heavy lifting of enrichment, correlation and pivoting. The analyst reviews the output, validates priority and context, and retains full decision authority before any response action is taken.

Web activity investigations tell a similar story. Malicious URL detections and potentially unwanted application alerts previously consumed 25 minutes of analyst time. The runbook brings that down to 3 minutes — an 88% reduction. The speed matters, but so does the consistency. Every investigation follows the same structured logic, with an analyst in the loop at the point of decision.

Cloud environment investigations round out the picture. AWS-based alerts — snapshot deletions, configuration changes, activity that signals potential data exposure or infrastructure manipulation — previously took 30 minutes to work through. The runbook completes the same investigation in 10 minutes, a 67% time savings that compounds quickly across a high-volume cloud environment. In each case, no action is taken, no case closed, without an analyst's eyes on it first.

Across all three, the pattern is the same: the AI didn't create the investigative process. It operationalized one that already existed and was already sound. Which means the ROI of your AI SOC investment is directly proportional to the quality of the foundation beneath it.

Human Judgment Isn't Optional

None of this replaces the need for experienced operators. What it changes is how their time is spent. When agents handle Tier 1 volume, the repetitive, pattern-matched investigations, analysts are freed to focus on Tier 2 and Tier 3 work: threat hunting, adversary emulation, tuning detection logic and reviewing the outputs that agents escalate.

The human role in the AI SOC is elevated, and realizing that potential requires building a SOC where humans and agents are working from the same playbook: one built on structured detection, validated data and offensive-informed context. AI handles the volume. Humans handle the judgment. That division only works when the underlying systems give both sides what they need to operate with confidence.

The Fundamentals Are the AI Strategy

There's a version of the AI SOC conversation that treats agentic tools as a shortcut around investment in people, process and architecture. That version leads to a faster, more automated version of the same reactive posture security teams have struggled with for years.

The better path, and the one we're seeing work in practice, is to recognize that getting your house in order isn't preparation for the AI SOC. It is the AI SOC strategy.

Detection-as-Code, unified telemetry, continuous adversary validation, structured response automation: these aren't legacy concepts being replaced by AI. They are the conditions under which AI agents actually succeed.

As for the changing role of humans in the SOC? At UV, the human and the machine work in tandem. AI runbooks act as a force multiplier, automating the cognitive heavy lifting of enrichment, correlation, and pivoting, but decision authority stays with the analyst. An agentic AI can triage and work a runbook, but the case is always reassigned to a person before any response action is taken. No autonomous blocking. No auto-isolation. No case closure without an analyst's eyes on it. Tested guardrails to limit AI autonomy.
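That guardrail pattern reduces to a simple approval gate. The class and function names below are illustrative, not a real product API: the agent enriches, correlates and recommends, but a response action can only execute once an analyst has explicitly approved the case.

```python
# Hypothetical human-in-the-loop gate: the agent investigates and recommends,
# but execution is blocked until an analyst approves.
from dataclasses import dataclass, field

@dataclass
class Case:
    alert: str
    enrichment: list = field(default_factory=list)
    recommended_action: str = ""
    analyst_approved: bool = False
    actions_taken: list = field(default_factory=list)

def agent_investigate(case):
    """Agent does the cognitive heavy lifting: enrich, correlate, recommend."""
    case.enrichment.append(f"correlated telemetry for: {case.alert}")
    case.recommended_action = "isolate_host"
    return case

def execute_response(case):
    """Guardrail: no autonomous action; analyst approval is mandatory."""
    if not case.analyst_approved:
        raise PermissionError("response blocked: analyst approval required")
    case.actions_taken.append(case.recommended_action)
    return case

case = agent_investigate(Case(alert="privileged account brute force"))
blocked = False
try:
    execute_response(case)          # agent alone cannot act
except PermissionError:
    blocked = True
case.analyst_approved = True        # analyst reviews and signs off
execute_response(case)              # only now does the action run
```

The design choice is that approval lives on the case, not the agent: however capable the runbook becomes, the execution path physically requires a human decision.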

To deliver better security outcomes amidst the AI Hype Cycle, the strategy is clear: Build the foundation, and the agents will follow.