Everyone is adding AI to their security stack right now, often without a clear answer to the question "what problem does this solve?" The gap between AI marketing and AI reality in cybersecurity is still wide. But the tools that do work, when deployed with discipline, are genuinely useful.

Why this episode matters right now: As AI tools proliferate, practitioners need a framework for evaluating them against actual security workflows, not vendor promises. Ep 95 is a practitioner-level breakdown of where AI automation is genuinely useful and where it introduces new risk.

3 Key Takeaways:

  • Start with low-risk, high-repetition tasks. AI earns its place in SOC workflows by handling alert triage and log summarization before you trust it with anything consequential.

  • AI does not replace human judgment. Automated workflows still need human review for anything that results in a blocking action or a report to leadership.

  • Know where your data goes. Many AI tools send data to external APIs, which matters enormously when that data includes OT network telemetry or incident details.
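The pattern behind these takeaways can be sketched in a few lines of Python. This is purely illustrative, not any vendor's implementation: the field names, thresholds, and function names are all hypothetical, but the shape is the point — the AI auto-handles only low-risk noise, anything that would trigger a blocking action gets routed to a human, and sensitive data is stripped before it could ever reach an external API.

```python
# Hypothetical sketch of a human-in-the-loop triage gate.
# All names and thresholds are illustrative, not from a real tool.

# Fields that must never leave the organization (e.g. OT telemetry,
# incident details) are stripped before any external AI API call.
SENSITIVE_FIELDS = {"ot_telemetry", "incident_details"}

def redact(alert: dict) -> dict:
    """Remove sensitive fields before sending an alert to an external API."""
    return {k: v for k, v in alert.items() if k not in SENSITIVE_FIELDS}

def triage(ai_score: float) -> str:
    """Route an alert based on a (hypothetical) AI risk score in [0, 1]."""
    if ai_score < 0.2:
        # Low-risk, high-repetition noise: the only tier AI closes on its own.
        return "auto_close"
    if ai_score > 0.8:
        # Would result in a blocking action: requires human sign-off.
        return "human_review_before_block"
    # Everything in between: summarize for a human analyst, don't act.
    return "human_review"
```

The design choice worth noticing is that the AI never owns a consequential decision; it only decides which queue a human sees the alert in, and what data was safe to send out in the first place.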

Quick Intel Brief

MITRE ATT&CK is beginning to track AI-assisted attack techniques, including AI-generated phishing and automated reconnaissance. At the same time, NIST has released guidance on AI risk management (AI RMF) that applies directly to security tool procurement decisions. The industry is moving from "AI is coming" to "AI is already here and needs governance."

Aaron's Take

I am genuinely excited about AI for security, but I am cautious about organizations that jump to advanced AI workflows before they have the basics right. If you do not have a good asset inventory, your AI tool is working with incomplete data and will give you incomplete answers. Get the foundation solid first, then layer in the automation. That sequence matters.

What To Do Next

What's your experience with AI tools in your security operations? Are they saving time or creating new problems? Hit reply and let me know.

Aaron Crow

Keep reading