

There is a version of AI SOC that most security teams are familiar with. It summarizes alerts. It surfaces recommendations. It tells an analyst what to look at next. It is useful in the way a well-organized report is useful: it saves time reading, but the work still happens at a human pace.
That version of AI is not what this blog is about.
For MSSPs and SecOps teams operating at scale, advisory AI is not a destination. In fact, it presents a bottleneck in a different form. If every AI output still requires a human to review, approve, and act, then you have not changed the fundamental constraint on your operations. You have just moved it upstream.
Alert volume keeps climbing. Threat actor dwell time keeps shrinking. The analyst-to-alert ratio stays broken.
Adding an AI chat interface to the SOC will not move an organization ahead in a meaningful way. The real gains come when AI can perform security operations itself, with access, autonomy, and accountability.
That shift is called being AI operator-first, and most platforms simply are not built for it. LimaCharlie can bridge the gap between AI and security architecture, but first, it’s important to understand what creates it.
The challenge in becoming AI operator-first is purely architectural.
Most security platforms were built for human operators. Interfaces were designed for screens. Workflows were designed for clicks. Integrations were built point-to-point, one tool at a time. Systems are UI-driven, with users assumed to be human.
When AI arrived, vendors layered assistants on top of pre-existing systems as a feature. The underlying platform stayed the same. It was still built for humans, not agents.
The result is AI that can read your environment but not act in it. AI that can suggest a detection rule but cannot write, test, and deploy one. AI that can identify a compromised endpoint but cannot isolate it. The gap between what the AI sees and what it can do is filled by a human analyst.
Speed is one problem, but scale is another. An MSSP managing hundreds of customer tenants cannot afford an analyst-in-the-loop for every AI output across every environment. The math does not work. Operations that rely on AI advice but human action still scale with headcount. More customers means more staff, regardless of how good the AI summaries get.
There is also a transparency problem baked into most AI SOC products. Vendors' models promise specific outcomes, but their mechanics remain opaque. What model is running? What data is it acting on? Why did it make that call?
Security teams are expected to trust a black box in a high-stakes environment where decisions have consequences. Rejecting black-box AI under these conditions is reasonable.
Building for AI operators means making different architectural decisions from the ground up.
LimaCharlie built the Agentic SecOps Workspace to be API-first from day one. Every function available in the web interface is also available via API: detection and response, telemetry queries, case management, endpoint actions, tenant administration, and more.
This was true of our platform before AI became a conversation in this industry. It turns out that design decision was the right one for making AI operations practical.
An AI agent has the same API access as a human operator, so it can do the same things. It can query telemetry, write and test detection rules, trigger response actions, open and update cases, generate reports, and onboard infrastructure. The distinction between human and agent collapses down to a single variable: the permissions policy you set.
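To make that concrete, here is a minimal sketch using the open-source limacharlie Python SDK (pip install limacharlie). It assumes credentials come from the SDK's usual environment-variable or config-file lookup, and it only reads; the commented-out line shows where a caller with tasking rights could act.

```python
import limacharlie

# The same entry point whether a human script or an AI agent is driving:
# an API key, governed by the same RBAC model.
man = limacharlie.Manager()  # credentials from env vars or ~/.limacharlie

# Enumerate sensors exactly as an analyst's script would.
for sensor in man.sensors():
    info = sensor.getInfo()
    print(info.get("hostname"), info.get("plat"))
    # With the sensor.task permission, the same caller can respond:
    # sensor.task("segregate_network")  # built-in network isolation command
```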
Every action an AI agent takes on LimaCharlie is visible, logged, and auditable. The same RBAC model that governs human analysts applies to AI agents. You can grant an agent investigation access without remediation authority. You can scope an agent to specific tenants only. There is no separate trust model for AI. There is no black box.
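The scoping itself is just data. The lists below sketch two permission sets in LimaCharlie's resource.action naming style; treat the specific names as illustrative assumptions to verify against the platform's RBAC reference. The split between reading and acting is the point.

```python
# Investigation-only agent: reads telemetry and sensor state,
# but cannot task endpoints or change detection logic.
INVESTIGATOR = [
    "sensor.get",
    "sensor.list",
    "insight.evt.get",  # historical telemetry reads
    "dr.list",          # read detection & response rules
]

# Responder agent: everything above, plus the authority to act.
RESPONDER = INVESTIGATOR + [
    "sensor.task",      # send endpoint commands, e.g. segregate_network
    "dr.set",           # write and deploy detection & response rules
]
```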
That auditability has a home. Every action an agent takes surfaces in LimaCharlie's case management system: findings are written as cases, investigations are updated as they progress, and the full record of what ran, what the agent called, and what it concluded is readable after the fact. Nobody has to dig through session logs to understand what an agent did. The case system is where AI communicates back to the team, which means the no-black-box claim isn't a design philosophy. It's a specific place you can click into and read.
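Reading that record back out is itself an API call. The route and payload shape below are hypothetical (the real paths live in LimaCharlie's API reference); what matters is the shape of the record: who opened it, what ran, what it concluded.

```python
import os
import requests

API = "https://api.limacharlie.io"  # hypothetical /v1/cases route below
headers = {"Authorization": f"Bearer {os.environ['LC_API_KEY']}"}

cases = requests.get(f"{API}/v1/cases", headers=headers, timeout=30).json()
for case in cases.get("cases", []):
    # An agent-opened case carries its full trail: actions, tool calls, conclusion.
    print(case.get("title"), case.get("opened_by"))  # e.g. "agent:triage"
```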
There is a specific way of thinking about automation that most security teams are trained to use. It’s worth examining because this conceptual framework has to change.
Traditional automation (SOAR playbooks, scripted workflows, pre-built integrations) works by trigger. Something happens, a rule fires, and a sequence of steps executes. The logic is written in advance. The steps are known. If a situation falls outside the defined path, the automation stops and a human picks it up.
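Reduced to a sketch, that model looks like this. The stubs stand in for real integrations; the structure is the point: every step and branch is enumerated in advance.

```python
def sandbox_detonate(url: str) -> str:
    return "malicious"  # stub: a real playbook calls a sandbox API

def block_url(url: str) -> None:
    print(f"blocking {url}")  # stub: a real playbook calls the proxy or EDR

def escalate_to_human(alert: dict) -> None:
    print("handing off:", alert)  # off the scripted path, a human takes over

def on_phishing_alert(alert: dict) -> None:
    url = alert["url"]                         # step 1: fixed extraction
    if sandbox_detonate(url) == "malicious":
        block_url(url)                         # step 2: fixed response
    else:
        escalate_to_human(alert)               # anything unanticipated stops here
```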
AI agents are different. Give an agent a goal and it figures out the steps.

LimaCharlie CEO Maxime Lamothe-Brassard puts it like this: you can tell the AI, "Find every tenant that used this IP at 9 a.m. six months ago. On each matching endpoint, run a live survey and pull all Adobe software versions. Cross-reference that against CVEs released in the past six months for Adobe products."
This is not a pre-built workflow stepping through predefined stages. This is analyst intent, stated in plain language and executed by an AI agent that decides how to accomplish it. It's AI doing security operations the way we always envisioned it.
The agent may run a data lake query, write results to disk, search the web for CVE data, write a script to cross-reference the two datasets, and surface findings as a case. The analyst does not script any of that. They simply state what they need done.
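If an analyst did have to script that plan by hand, it might look like the sketch below. Every helper is a hypothetical stub for a tool call the agent chains together on its own, and the sample data is invented.

```python
def list_tenants():                         # stub: tenant enumeration
    return ["acme", "globex"]

def query_datalake(tenant, ip):             # stub: historical telemetry query
    return [{"sid": "endpoint-1"}]

def live_survey(tenant, sid):               # stub: pull installed Adobe software
    return {"Acrobat Reader": "23.001.20093"}

def fetch_recent_cves(vendor, months):      # stub: public CVE feed lookup
    return [{"id": "CVE-2023-26369", "product": "Acrobat Reader"}]

def hunt_adobe_exposure(ip):
    findings = []
    for tenant in list_tenants():                      # cross-tenant scope
        for hit in query_datalake(tenant, ip):         # who touched the IP
            versions = live_survey(tenant, hit["sid"])
            cves = [c for c in fetch_recent_cves("Adobe", 6)
                    if c["product"] in versions]       # cross-reference
            if cves:
                findings.append({"tenant": tenant, "sid": hit["sid"], "cves": cves})
    return findings  # the agent would write this up as a case

print(hunt_adobe_exposure("203.0.113.7"))
```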
Pre-built integrations are fixed. They cover the use cases someone anticipated during design. Goals, on the other hand, are open-ended. An agent working toward a goal will route around obstacles, chain tools together, write intermediate scripts, and find paths to an answer.
For a SOC operating across hundreds of tenants, this means the work you can automate is no longer restricted to what you had time to build. Automation extends to anything you can describe.
This shift flips the economics of automation. Automation is no longer a fragile process that covers only the easy, predictable cases and bottlenecks on a human for everything else. AI agents can take on complex, multi-step, cross-tenant work.
This is where headless operation becomes the practical delivery mechanism. Agents in the Agentic SecOps Workspace don't require a human at the keyboard to run. They fire from detections, case events, external webhooks, or direct API calls, and they execute across every tenant in scope.
An MSSP can define an agent once and have it run autonomously across hundreds of customer environments, surfacing findings without a single analyst initiating the work. The goal-based model only changes operations at scale if the agents executing those goals run without waiting for human input.
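Triggering such a run is a single authenticated request. The endpoint path and payload shape below are hypothetical (consult the Agentic SecOps Workspace documentation for the real interface); the point is that nothing here requires a browser or an analyst at a keyboard.

```python
import os
import requests

resp = requests.post(
    "https://api.limacharlie.io/v1/agents/run",  # hypothetical agent route
    headers={"Authorization": f"Bearer {os.environ['LC_API_KEY']}"},
    json={
        "goal": "Survey Adobe versions on endpoints that contacted 203.0.113.7",
        "tenants": "all-in-scope",   # fan out across customer orgs
        "report_as": "case",         # findings land in case management
    },
    timeout=30,
)
resp.raise_for_status()
```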
Try agentic SecOps for free
LimaCharlie offers a community edition you can deploy in your own environment today, at no cost. If you are operating an MSSP or building out a SOC and want to see what AI operations look like on a platform built for it from the start, this is the place to begin.
See our AI agent repo: https://github.com/refractionPOINT/lc-ai/
Visit https://app.limacharlie.io/signup to get started or get in touch with the team directly.