Presented by Splunk, a Cisco Company
As AI rapidly evolves from a theoretical promise to an operational reality, CISOs and CIOs face a fundamental challenge: how to harness AI’s transformative potential while maintaining the human oversight and strategic thinking that security demands. The rise of agentic AI is reshaping security operations, but success requires balancing automation with accountability.
The efficiency paradox: Automation without abdication
The pressure to adopt AI is intense. Organizations are being pushed to reduce headcount or redirect resources toward AI-driven initiatives, often without fully understanding what that transformation entails. The promise is compelling: AI can reduce investigation times from 60 minutes to just 5 minutes, potentially delivering 10x productivity improvements for security analysts.
However, the critical question isn’t whether AI can automate tasks — it’s which tasks should be automated and where human judgment remains irreplaceable. The answer lies in understanding that AI excels at accelerating investigative workflows, but remediation and response actions still require human validation. Taking a system offline or quarantining an endpoint can have massive business impact. An AI making that call autonomously could inadvertently cause the very disruption it’s meant to prevent.
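To make that boundary concrete, consider a minimal sketch of how an automation pipeline might gate actions by risk. Everything here is illustrative rather than any specific product’s API: investigative steps run autonomously, while business-impacting actions such as endpoint quarantine wait for a named human approver.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Risk(Enum):
    LOW = auto()   # investigative steps: enrichment, lookups, correlation
    HIGH = auto()  # business-impacting: isolation, shutdown, account lockout

@dataclass
class Action:
    name: str
    risk: Risk
    target: str

def dispatch(action: Action, approved_by: str | None = None) -> str:
    # Hypothetical policy: AI executes low-risk steps on its own;
    # anything high-risk is queued until a human signs off.
    if action.risk is Risk.HIGH and approved_by is None:
        return f"QUEUED for analyst approval: {action.name} on {action.target}"
    return f"EXECUTED: {action.name} on {action.target}"

print(dispatch(Action("enrich_ip_reputation", Risk.LOW, "10.0.0.5")))
print(dispatch(Action("quarantine_endpoint", Risk.HIGH, "laptop-042")))
print(dispatch(Action("quarantine_endpoint", Risk.HIGH, "laptop-042"),
               approved_by="analyst.jane"))
```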
The goal isn’t to replace security analysts but to free them for higher-value work. With routine alert triage automated, analysts can focus on red team/blue team exercises, collaborate with engineering teams on remediation, and engage in proactive threat hunting. There’s no shortage of security problems to solve — there’s a shortage of security experts to address them strategically.
The trust deficit: Showing your work
While confidence in AI’s ability to improve efficiency is high, skepticism about the quality of AI-driven decisions remains significant. Security teams need more than just AI-generated conclusions — they need transparency into how those conclusions were reached.
When AI determines an alert is benign and closes it, SOC analysts need to understand the investigative steps that led to that determination. What data was examined? What patterns were identified? What alternative explanations were considered and ruled out?
This transparency builds trust in AI recommendations, enables validation of AI logic, and creates opportunities for continuous improvement. Most importantly, it maintains the critical human-in-the-loop for complex judgment calls that require nuanced understanding of business context, compliance requirements, and potential cascading impacts.
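One way to operationalize “showing your work” is to have the triage agent emit a structured trace alongside its verdict, so an analyst can replay every step. The sketch below assumes nothing beyond the Python standard library; the field names and alert ID are invented for illustration.

```python
import json
from datetime import datetime, timezone

def record_step(trace: list, step: str, evidence: str, finding: str) -> None:
    """Append one auditable investigative step to the trace."""
    trace.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "step": step,
        "evidence_examined": evidence,
        "finding": finding,
    })

trace: list[dict] = []
record_step(trace, "authentication_history",
            "VPN logs, last 30 days",
            "login source matches user's usual geolocation")
record_step(trace, "alternative_hypotheses",
            "impossible-travel and credential-stuffing patterns",
            "neither pattern present; ruled out")

# The disposition ships with the full trace, not just the conclusion.
verdict = {"alert_id": "ALRT-1234", "disposition": "benign", "steps": trace}
print(json.dumps(verdict, indent=2))
```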
The future likely involves a hybrid model where autonomous capabilities are integrated into guided workflows and playbooks, with analysts remaining involved in complex decisions.
The adversarial advantage: Fighting AI with AI — carefully
AI presents a double-edged sword in security. While we’re carefully implementing AI with appropriate guardrails, adversaries face no such constraints. AI lowers the barrier to entry for attackers, enabling rapid exploit development and vulnerability discovery at scale. What was once the domain of sophisticated threat actors could soon be accessible to script kiddies armed with AI tools.
The asymmetry is striking: defenders must be thoughtful and risk-averse, while attackers can experiment freely. If we make a mistake implementing autonomous security responses, we risk taking down production systems. If an attacker’s AI-driven exploit fails, they simply try again with no consequences.
This creates an imperative to use AI defensively, but with appropriate caution. We must learn from attackers’ techniques while maintaining the guardrails that prevent our AI from becoming the vulnerability. The recent emergence of malicious MCP (Model Context Protocol) supply chain attacks demonstrates how quickly adversaries exploit new AI infrastructure.
The skills dilemma: Building capabilities while maintaining core competencies
As AI handles more routine investigative work, a concerning question emerges: will security professionals’ fundamental skills atrophy over time? This isn’t an argument against AI adoption — it’s a call for intentional skill development strategies. Organizations must balance AI-enabled efficiency with programs that maintain core competencies. This includes regular exercises that require manual investigation, cross-training that deepens understanding of underlying systems, and career paths that evolve roles rather than eliminate them.
The responsibility is shared. Employers must provide tools, training, and culture that enable AI to augment rather than replace human expertise. Employees must actively engage in continuous learning, treating AI as a collaborative partner rather than a replacement for critical thinking.
The identity crisis: Governing the agent explosion
Perhaps the most underestimated challenge ahead is identity and access management in an agentic AI world. IDC estimates there will be 1.3 billion AI agents by 2028, each requiring its own identity, permissions, and governance. The complexity compounds with every agent added.
Overly permissive agents represent significant risk. An agent with broad administrative access could be socially engineered into taking destructive actions, approving fraudulent transactions, or exfiltrating sensitive data. The technical shortcuts engineers take to “just make it work” — granting excessive permissions to expedite deployment — create vulnerabilities that adversaries will exploit.
Tool-based access control offers one path forward, granting agents only the specific capabilities they need. But governance frameworks must also address how LLMs themselves might learn and retain authentication information, potentially enabling impersonation attacks that bypass traditional access controls.
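In practice, tool-based access control can be as simple as an allow-list keyed by agent identity, so no agent carries standing administrative credentials. The sketch below is a hypothetical enforcement layer; the agent names and tool names are assumptions, not any particular framework’s API.

```python
# Hypothetical allow-list: each agent identity maps to the narrow set of
# tools it may invoke -- no broad admin grants "to just make it work".
AGENT_TOOL_GRANTS: dict[str, set[str]] = {
    "triage-agent":     {"search_logs", "enrich_ip_reputation"},
    "compliance-agent": {"read_policy_docs", "generate_report"},
}

class ToolAccessDenied(PermissionError):
    pass

def invoke_tool(agent_id: str, tool: str, **kwargs) -> None:
    granted = AGENT_TOOL_GRANTS.get(agent_id, set())
    if tool not in granted:
        # Denials are surfaced for audit rather than silently broadened.
        raise ToolAccessDenied(f"{agent_id} is not granted '{tool}'")
    print(f"{agent_id} -> {tool}({kwargs})")

invoke_tool("triage-agent", "search_logs", query="failed logins")
try:
    invoke_tool("triage-agent", "quarantine_endpoint", host="laptop-042")
except ToolAccessDenied as err:
    print("blocked:", err)
```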
The path forward: Start with compliance and reporting
Amid these challenges, one area offers immediate, high-impact opportunity: continuous compliance and risk reporting. AI’s ability to consume vast amounts of documentation, interpret complex requirements, and generate concise summaries makes it ideal for compliance and reporting work that has traditionally consumed enormous amounts of analyst time. This represents a low-risk, high-value entry point for AI in security operations.
The data foundation: Enabling the AI-powered SOC
None of these AI capabilities can succeed without addressing the fundamental data challenges facing security operations. SOC teams struggle with siloed data and disparate tools. Success requires a deliberate data strategy that prioritizes accessibility, quality, and unified data contexts. Security-relevant data must be immediately available to AI agents without friction, properly governed to ensure reliability, and enriched with metadata that provides the business context AI cannot infer on its own.
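A small illustration of that last point: enriching a raw event with business metadata at ingest, so an agent sees asset criticality and compliance scope rather than just a hostname. The asset inventory and field names here are invented for the sketch.

```python
# Illustrative asset inventory: the business context a raw log line lacks.
ASSET_CONTEXT = {
    "db-prod-07": {"owner": "payments-team", "criticality": "tier-1",
                   "compliance": ["PCI-DSS"]},
}

def enrich(event: dict) -> dict:
    """Attach business metadata to a raw event before an agent reasons on it."""
    context = ASSET_CONTEXT.get(event.get("host"), {})
    return {**event, "business_context": context}

raw = {"host": "db-prod-07", "signature": "suspicious outbound transfer"}
print(enrich(raw))
```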
Closing thought: Innovation with intentionality
The autonomous SOC is emerging — not as a light switch to flip, but as an evolutionary journey requiring continuous adaptation. Success demands that we embrace AI’s efficiency gains while maintaining the human judgment, strategic thinking, and ethical oversight that security requires.
We’re not replacing security teams with AI. We’re building collaborative, multi-agent systems where human expertise guides AI capabilities toward outcomes that neither could achieve alone. That’s the promise of the agentic AI era — if we’re intentional about how we get there.
Tanya Faddoul is VP Product, Customer Strategy and Chief of Staff for Splunk, a Cisco Company. Michael Fanning is Chief Information Security Officer for Splunk, a Cisco Company.
Cisco Data Fabric, powered by the Splunk Platform, provides the data architecture needed to unlock the full potential of AI in the SOC: a unified data fabric, federated search capabilities, and comprehensive metadata management. Learn more about Cisco Data Fabric.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.
