
Sr. Technical Content Strategist

Revisiting a conversation between LimaCharlie co-founder Christopher Luft and Chris Cochran, Field CISO & Vice President of AI Security at SANS Institute, on The Cybersecurity Defenders Podcast.
For most of cybersecurity’s history, defenders could operate under a safe assumption: somewhere on the other end of an attack, a human was making decisions. Scripts might automate parts of the kill chain, tools might accelerate execution, but a person was in the loop. Human-led attacks introduced a rhythm, an identifiable tempo, and a set of behavioral fingerprints that skilled analysts recognized.
AI has changed all of that. Agentic AI has created a third behavioral category that occupies new territory, one the detection frameworks most organizations are running today aren't built to identify.
Velocity-based detections are one example. A human doing recon takes minutes. A script takes seconds. An agent lands somewhere in between, depending on what the model encounters and how it reasons about results. Simple threshold rules built around either baseline will systematically miss agentic activity.
Path-based detections are another example. Traditional automation follows patterns you can predict with fixed-sequence signatures. AI agents don't follow fixed paths; they navigate. Their sequence of techniques and tools varies run to run based on what they discover. Fixed-sequence signatures offer poor coverage against this adaptive behavior.
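To make the limitation concrete, here is a minimal sketch of why a fixed-sequence signature misses adaptive paths. The technique labels and the detection logic are illustrative assumptions, not a real rule language:

```python
# Hypothetical illustration: a fixed-sequence signature vs. run-to-run variation.
# Technique labels are simplified placeholders, not a real taxonomy.

FIXED_SIGNATURE = ["discovery", "credential_access", "lateral_movement", "exfiltration"]

def matches_fixed_signature(observed, signature=FIXED_SIGNATURE):
    """Fire only if the observed techniques appear in the exact signature order."""
    return observed == signature

# A scripted attack replays the same sequence every run -> the signature fires.
scripted_run = ["discovery", "credential_access", "lateral_movement", "exfiltration"]

# An agent reorders and repeats steps based on what it finds -> the signature misses.
agentic_run = ["discovery", "lateral_movement", "discovery",
               "credential_access", "exfiltration"]

print(matches_fixed_signature(scripted_run))  # True
print(matches_fixed_signature(agentic_run))   # False: same techniques, different path
```

The agent used every technique the signature covers, yet never triggered it, because the signature encodes order rather than behavior.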
Organizations that close this gap now will be better positioned as AI-driven attacks proliferate.
There are effectively three categories of attacker behavior your telemetry will encounter, each with distinct characteristics.
Human-paced attacks are the oldest and most familiar. Manual navigation, command-line decisions, think time between steps. The signature is unmistakable once you know it: human typing cadence, variable dwell times, slightly irregular tool usage. Even experienced operators leave a human rhythm in the logs.
Traditional automation is faster and more mechanical. Scripts execute defined sequences at machine speed with zero hesitation. The signature is almost too clean: perfectly regular timing, deterministic command sequences, no variance. For all their speed, automated attacks remain brittle. When automation hits an unexpected condition, it breaks. This sudden cessation of activity is detectable in post-incident analysis.
Agentic attacks occupy a new middle ground. An agent moves faster than a human but slower than a script, because it's actually reasoning. It's evaluating what it found, deciding among possible next steps and backtracking when something doesn't work. The behavioral signature reflects this: machine-speed execution punctuated by irregular pauses, non-deterministic path selection, adaptive pivots when initial approaches fail. It looks like automation that pauses to think.
During the podcast conversation, Cochran described a hackathon his team ran specifically focused on building a honeypot for AI agents. The first challenge was just establishing the fingerprinting criteria: what does agentic behavior actually look like versus normal automation versus manual activity? This foundational work is exactly what most organizations need to start now.
The right approach to detecting AI behavior is enriching existing analytics with the signals that distinguish agentic activity from the alternatives.
A few starting points:
Timing variance at machine scale. Agents produce irregular inter-command timing at speeds faster than humans. Fast but not metronomic is a distinguishing signal.
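A rough sketch of what "fast but not metronomic" could look like as a heuristic. The thresholds below are illustrative placeholders that would need tuning against real telemetry:

```python
import statistics

def timing_profile(timestamps):
    """Classify a command stream by inter-command timing.

    Illustrative thresholds, not tuned values:
    - humans: slow (mean gaps measured in seconds to minutes)
    - scripts: fast and near-metronomic (very low variance)
    - agents: fast but irregular (machine speed with reasoning pauses)
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(gaps)
    cv = statistics.stdev(gaps) / mean  # coefficient of variation
    if mean > 5.0:
        return "human-paced"
    if cv < 0.1:
        return "scripted"
    return "possible-agent"  # fast but not metronomic

# Command timestamps in seconds (synthetic examples):
print(timing_profile([0, 12, 31, 40, 66]))      # human-paced
print(timing_profile([0, 0.5, 1.0, 1.5, 2.0]))  # scripted
print(timing_profile([0, 0.4, 2.1, 2.5, 5.8]))  # possible-agent
```

The coefficient of variation is the key signal here: scripts cluster near zero, while an agent's pauses to reason produce high variance at machine speed.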
Adaptive lateral movement. Sequences of failed attempts followed by successful pivots at machine speed are a distinct pattern worth flagging.
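The fail-then-pivot pattern could be flagged with logic along these lines. The event shape, window, and failure threshold are assumptions for the sketch:

```python
def adaptive_pivot_events(events, max_gap=2.0, min_failures=2):
    """Flag bursts of failed attempts followed by a success at machine speed.

    `events` is a list of (timestamp_seconds, outcome) tuples where outcome is
    "fail" or "success". The gap and threshold values are illustrative.
    """
    flagged = []
    streak = 0
    last_ts = None
    for ts, outcome in events:
        within_burst = last_ts is not None and (ts - last_ts) <= max_gap
        if outcome == "fail":
            streak = streak + 1 if within_burst else 1
        else:
            if within_burst and streak >= min_failures:
                flagged.append(ts)  # rapid fail/fail/.../success pattern
            streak = 0
        last_ts = ts
    return flagged

# Three failed logins then a success, all within seconds of each other:
events = [(0.0, "fail"), (0.7, "fail"), (1.2, "fail"), (1.9, "success")]
print(adaptive_pivot_events(events))  # [1.9]
```

A human retrying credentials produces the same shape but over minutes, not seconds; the `max_gap` window is what separates adaptive machine-speed pivoting from ordinary fumbling.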
Reasoning artifacts. Some agentic frameworks leave traces: API calls to model endpoints, prompt construction patterns, and characteristic tool invocation sequences that reveal the underlying agentic origin.
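One of the cheapest versions of this is matching egress telemetry against known model-API hosts. The host list below is a small illustrative sample, and the log schema is an assumption:

```python
# Hypothetical sketch: scanning egress logs for calls to known model-API hosts.
# This host set is a sample, not an exhaustive inventory.
MODEL_API_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def model_endpoint_hits(proxy_log):
    """Return log entries whose destination matches a known model-API host."""
    return [entry for entry in proxy_log if entry["dest_host"] in MODEL_API_HOSTS]

log = [
    {"src": "10.0.4.17", "dest_host": "api.anthropic.com"},
    {"src": "10.0.4.17", "dest_host": "updates.example.com"},
]
print(model_endpoint_hits(log))  # the api.anthropic.com entry only
```

Model-API traffic from a workstation that has no business calling an LLM endpoint is a strong enrichment signal, even before any behavioral analysis.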
Honeypot divergence. A script ignores anything it wasn't programmed to seek. An agent investigates. Planting decoy resources and watching how unknown actors respond can surface adaptive behavior that other detections miss.
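In its simplest form, honeypot divergence reduces to watching whether an actor touches resources no legitimate workflow or script should. The decoy paths and event shape here are hypothetical:

```python
# Sketch of honeypot divergence: the decoy paths are hypothetical examples.
# A script only touches its programmed targets and never hits a decoy;
# an agent exploring the filesystem will investigate them.
DECOY_PATHS = {
    "/srv/backups/credentials_old.txt",
    "/home/admin/aws_keys.bak",
}

def decoy_touches(file_access_events):
    """Return the set of decoy paths an actor actually read."""
    return {e["path"] for e in file_access_events if e["path"] in DECOY_PATHS}

session = [
    {"path": "/var/www/html/index.php"},
    {"path": "/srv/backups/credentials_old.txt"},  # adaptive investigation
]
print(decoy_touches(session))
```

Any non-empty result is interesting on its own; which decoys were chosen, and in what order, is the divergence signal that separates an exploring agent from replayed automation.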
LimaCharlie's Agentic SecOps Workspace is built for the flexible, high-fidelity detection engineering work needed to identify AI attacks. Analysts get the precise endpoint telemetry needed to measure inter-command timing variance, the flexible detection layer to build adaptive lateral movement signatures, and an API-first architecture that integrates naturally with honeypot tooling. As the fingerprints change, the platform supports the engineering work to keep pace.
Agentic attacks aren't a future problem. The early wave is already here. The behavioral signatures that distinguish them from human operators and traditional automation are real and detectable, for those who know how to look.
Organizations that build AI detection capabilities now, before the volume increases, will be in a fundamentally better position than those that wait.
Ready to build detection that keeps pace with agentic threats?
Visit limacharlie.io