

Picture a SOC that investigates its own alerts, hunts threats across customer tenants, isolates compromised endpoints, and writes its own detection rules. Envision the same SOC attacking itself every morning to find the gaps it missed, all before your analysts arrive for the day.
This is not a roadmap item, but an operational reality on LimaCharlie. It’s what agentic AI security looks like on a platform built to support it.
The security industry has spent the past two years talking about AI in the SOC. Most vendors are still struggling to bridge the operational gap. They’re fixated on advisory capabilities: AI that reads alerts and suggests next steps, summarizes incidents, and recommends actions for a human to execute. That is useful, but it is not the same as what LimaCharlie does.
LimaCharlie has been building and demonstrating something structurally different. Networks of AI agents divide security work among themselves, hand off findings to one another, take autonomous action within defined guardrails, and produce auditable records of everything they do.
This is agentic security, with AI acting as operators in a SOC rather than as advisors.
Most vendors cannot do this. Their platforms were never built for it. They are UI-first, with AI retrofitted on top. The result is agents with limited reach and fragmented context.
Agents need API access, not interfaces designed for humans. They need the ability to read telemetry, write detection rules, trigger response actions, open cases, and generate reports, all through the same access layer.
LimaCharlie was built API-first from the beginning. Every platform function is available to AI agents through the same access layer as human operators. That architectural decision is what makes agentic SecOps possible.
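To make the "same access layer" idea concrete, here is a minimal sketch in Python. The class and method names are hypothetical, not the actual LimaCharlie SDK; the point is the shape of an API-first surface that a human script and an AI agent's tool-calling loop can use identically, with every call leaving an audit record.

```python
# Illustrative sketch only: AccessLayer and its methods are hypothetical,
# not LimaCharlie's real API. One credentialed client exposes every
# platform function, and humans and agents share it.

from dataclasses import dataclass, field

@dataclass
class AccessLayer:
    """Single entry point for every platform function."""
    api_key: str
    audit_log: list = field(default_factory=list)

    def _record(self, action, detail):
        self.audit_log.append({"action": action, "detail": detail})

    def read_telemetry(self, sensor_id, query):
        self._record("read_telemetry", {"sensor": sensor_id, "query": query})
        return []  # would return matching events

    def write_detection_rule(self, name, rule):
        self._record("write_rule", {"name": name})

    def isolate_endpoint(self, sensor_id):
        self._record("isolate", {"sensor": sensor_id})

    def open_case(self, title, entities):
        self._record("open_case", {"title": title, "entities": entities})
        return {"case_id": 1, "title": title}

# A human operator and an AI agent call the exact same surface:
layer = AccessLayer(api_key="demo-key")
case = layer.open_case("Suspected kill chain", ["10.0.0.5", "host-42"])
layer.isolate_endpoint("host-42")
print(len(layer.audit_log))  # every call left an audit record
```

The design choice this illustrates: because agents consume the same layer as operators, no capability is "UI-only," and auditability falls out of the access layer rather than being bolted on per agent.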
Before getting into what the agents do, it is worth understanding why LimaCharlie structures things as composable agents rather than a single monolithic AI.
The answer comes down to specialization, cost, and control.
A single AI agent asked to handle everything across triage, deep investigation, containment, threat hunting, detection engineering, and reporting would be expensive to run, slow to execute, and hard to audit. More importantly, it would be brittle.
Overloading an agent with too many responsibilities and too many guardrails degrades the very quality that makes frontier models powerful: their ability to reason through novel problems with minimal constraint.
The composable approach treats each agent as a specialist:
A triage agent looks at a batch of alerts, recognizes patterns, and decides what needs escalation
An L2 investigation agent goes deep on a single case
A containment agent takes action to isolate threats
A threat hunter agent investigates indicators laterally
A detection engineering agent writes rules
Each agent is scoped to its role, given the permissions it needs and no more. Tasks are handed off to a specialist AI agent when the work requires its specific capability.
This maps directly to how mature human SOC teams are structured. The difference is that these agents run continuously, do not get tired, and can operate across hundreds of customer tenants simultaneously.
In LimaCharlie, agents are defined as records: a prompt, a model, a permissions policy, and a set of available tools. They are plain text, fully visible, forkable, and modifiable. There is no proprietary wrapping, no hidden execution layer.
Changing how an agent behaves is as straightforward as editing the prompt. Understanding why an agent did something means opening the session log and reading it.
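A sketch of what an agent-as-record might look like, following the description above (a prompt, a model, a permissions policy, a set of tools). The field names and values are illustrative, not LimaCharlie's actual schema.

```python
# Hypothetical agent records as plain data: visible, forkable, modifiable.

triage_agent = {
    "name": "triage",
    "model": "a-frontier-model",  # swappable when a better model ships
    "prompt": (
        "You are an L1 triage analyst. Group related alerts into cases, "
        "extract entities (IPs, hashes, hostnames), and escalate kill chains."
    ),
    "permissions": ["alerts.read", "cases.write"],  # scoped: no response actions
    "tools": ["list_alerts", "open_case", "tag_entities"],
}

containment_agent = {
    "name": "containment",
    "model": "a-frontier-model",
    "prompt": "You isolate confirmed-compromised endpoints and document why.",
    "permissions": ["cases.read", "sensors.isolate"],  # and no more
    "tools": ["read_case", "isolate_endpoint", "comment_on_case"],
}

# Because the record is plain data, changing behavior is an edit, and a new
# specialist is a fork: copy the record, change the prompt and policy.
hunting_agent = dict(triage_agent,
                     name="threat-hunter",
                     permissions=["telemetry.read", "cases.write"])
print(hunting_agent["name"])
```

Note how the permissions list enforces the "scoped to its role" principle: the triage record simply cannot trigger containment, because the capability is absent from its policy.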
Building a new agent or a new team of agents takes minutes, not months. In a recent public demonstration, Maxime showed a working breach-and-attack simulation pipeline built from existing platform components in roughly 30 minutes.
The platform's composable infrastructure handles the heavy lifting. The AI handles the complexity. The operator defines the intent.
To understand how cooperating agents handle a real incident, consider what happened when LimaCharlie ran a simulated multi-stage attack using Atomic Red Team against a test environment.
The attack generated multiple detections across an endpoint: initial access, execution, lateral movement, credential access. Rather than treating these as separate alerts, the first agent on the case:
Recognized the pattern as a kill chain.
Consolidated the alerts into a single case.
Identified the relevant entities (IPs, hashes, hostnames).
Tagged them for correlation against other cases in the system.
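The consolidation step above can be sketched as follows. The stage names come from the article; the grouping logic is illustrative and stands in for the agent's actual reasoning, which is model-driven rather than rule-driven.

```python
# Minimal sketch: merge alerts that share entities into one case, and flag
# the group as a likely kill chain when it spans multiple attack stages.

KILL_CHAIN_ORDER = ["initial_access", "execution",
                    "lateral_movement", "credential_access"]

alerts = [
    {"stage": "initial_access",    "entities": {"host-42", "10.0.0.5"}},
    {"stage": "execution",         "entities": {"host-42", "abc123hash"}},
    {"stage": "lateral_movement",  "entities": {"host-42", "dc-01"}},
    {"stage": "credential_access", "entities": {"dc-01"}},
]

def consolidate(alerts):
    stages = {a["stage"] for a in alerts}
    entities = set().union(*(a["entities"] for a in alerts))
    return {
        "alerts": len(alerts),
        "entities": sorted(entities),  # tagged for cross-case correlation
        # several distinct stages present => treat as one kill chain
        "kill_chain": sum(s in stages for s in KILL_CHAIN_ORDER) >= 3,
    }

case = consolidate(alerts)
print(case["kill_chain"], len(case["entities"]))
```

In the real workflow the extracted entities are what allow the platform to correlate this case against others in the system, which is why the triage agent tags them explicitly.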
A second agent, functioning as an L2 analyst, took the case from there. Rather than summarizing alerts, it built a full attack timeline, mapped the techniques used at each stage of the kill chain, assessed the scope of the compromise, and produced a clear assessment: post-exploitation activity confirmed, domain controller potentially at risk, immediate containment recommended.
The L2 agent tagged the containment agent directly in the case, the same way a human analyst would tag a colleague, with its findings and a recommendation to isolate the endpoint. The containment agent reviewed the evidence, confirmed the assessment, and isolated the endpoint from the network. It documented what it did and flagged follow-up actions, including a note that the domain controller lacked a LimaCharlie sensor and therefore had no visibility.
Simultaneously, a threat hunting agent began searching laterally across the environment for signs that the intrusion had spread. It reported its findings back into the same case, giving the human analyst a complete picture: what happened, what was done about it, where visibility gaps exist, and what to do next.
The entire workflow (triage, investigation, containment, hunting) ran through case management, which served as the coordination layer. Every agent action was logged. Every handoff was documented. A PDF report of the whole process could be exported and shared with a customer or manager with a single click.
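The handoff mechanism described above can be sketched as a shared case thread: agents pass work by tagging one another, and the thread itself is the audit trail. All names here are illustrative.

```python
# Sketch of case management as the coordination layer: agents hand off by
# tagging each other in a shared thread, and every entry is an audit record.

case = {"id": 101, "title": "Kill chain on host-42", "thread": []}

def post(case, author, body, tag=None):
    """Append an entry to the case thread; tag hands the case to another agent."""
    entry = {"author": author, "body": body, "tag": tag}
    case["thread"].append(entry)
    return entry

post(case, "l2-agent",
     "Post-exploitation confirmed; domain controller possibly at risk.",
     tag="containment-agent")
post(case, "containment-agent", "Evidence reviewed. host-42 isolated.")
post(case, "hunting-agent", "No lateral spread found beyond host-42.")

# The thread doubles as the audit log: who acted, what they did, who they
# handed off to. Exporting it yields the shareable incident report.
handoffs = [e for e in case["thread"] if e["tag"]]
print(len(case["thread"]), len(handoffs))
```

Using the case as the coordination layer means no side channel exists between agents: if an action or handoff happened, it is in the thread a human can read.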

For MSSPs, the application is straightforward: daily simulations across customer tenants, per-customer gap reports, and continuously improving detection coverage, deliverables that would otherwise take significant manual effort to build. On LimaCharlie, they are a natural extension of infrastructure that is already running.
The two workflows above share an important design principle: AI in the SOC should be composable, transparent, and owned by the operator, not locked inside a vendor's black box.
Composable means you can build the team your operations require, modify it as your needs change, and add new agents as new use cases emerge. A GitHub security monitor, a malware reverse engineering agent, a weekly customer report agent: each is built from the same components. The platform does not limit you to the use cases the vendor anticipated.
Transparent means every agent's prompt, every session log, and every action taken is visible and auditable. The same permission model that governs human analysts governs AI agents. There is no separate trust model for AI.
Owned means when a better model ships, you adopt it immediately. When your customer's environment changes, you update the agent prompt. You are not waiting on a vendor's release cycle or negotiating access to a capability your contract does not cover.
This is agentic AI security the way practitioners have envisioned it: AI operating as part of the SOC, with humans setting the guardrails, reviewing the outputs, and staying in control at every step.
The SOC that investigates its own alerts, hunts its own threats, and patches its own detection gaps every morning is not theoretical. It is running on LimaCharlie right now.
See our AI agent repo: https://github.com/refractionPOINT/lc-ai/
Visit limacharlie.io to explore the platform or get in touch with the team directly.