
Sr. Technical Content Strategist

Anyone who's stood up a SIEM from scratch knows the feeling: weeks of infrastructure work, integration headaches, and a services team alongside for the whole process. That experience shaped how people think about adopting anything new in security ops. The instinct is to treat AI the same way: budget for it, plan for it, bring in specialists.
This instinct is costing teams real time. Traditional infrastructure takes weeks of effort to stand up; infrastructure-as-code spins up in seconds. AI in security operations now works like the latter, not the former.
There's an assumption in security operations that deploying AI is a heavy lift: big contract, professional services engagement, and months of configuration before anything works. That assumption made sense in the past but no longer does.
LimaCharlie CEO Maxime Lamothe-Brassard observed in a recent video that the barrier to AI is no longer the technology. It's the mental model security professionals carried over from an earlier era.
In his demo, Maxime built a complete agentic security workflow from a single plain-English prompt, with no pre-built templates and no services engagement. The workflow:
Detects high-risk GitHub audit log events
Sends a Slack message to a human operator asking whether the activity is intentional
Hands off follow-up operations to an AI agent
If the human operator approves the activity, the agent documents the case and closes it. If the activity is flagged, the agent investigates the user's recent activity, marks the case critical, and drops its findings into the case notes.
The whole automated process is written in plain English. Changing how the investigation runs is as simple as editing a sentence.
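To make that concrete, a workflow like the one in the demo might be expressed in roughly this form. This is a hypothetical sketch of the wording, not the actual prompt from the video, and the exact phrasing a platform accepts will vary:

```
When a high-risk event appears in the GitHub audit log, send a Slack
message to the on-call operator asking whether the activity was
intentional.

If the operator confirms it was intentional, document the case with a
short summary of the event and close it.

If the operator flags it, investigate that user's recent activity, mark
the case as critical, and add your findings to the case notes.
```

The point is that each branch of the automation is a sentence a security analyst could write and review, and changing the investigation means editing a sentence rather than refactoring code.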
When standing up an AI incident response workflow in your current environment feels expensive or slow, that friction is worth examining. It isn't inherent to AI or to agentic AI security broadly. It's the result of how a given platform is architected.
When AI capabilities are gated behind specific SKUs, or when every new use case requires another services conversation, the platform is the bottleneck.
A platform built for agentic work should move fast by default, with AI operating across detection engineering, investigation, case management, and threat intel. Artificial boundaries between functions create unnecessary friction. The constraint is always the environment, not the AI.
For MSSPs, this distinction carries real weight. Agentic security for MSSPs means AI operating across all tenants with the same API access as a human analyst.
The teams making real progress with AI in SecOps share one characteristic: they started experimenting before they had a formal plan. Rather than waiting on a pilot program or vendor evaluation, they tried building something, saw it work, and kept going.
That's the actual path forward. Pick a workflow that's currently manual and repetitive. Write a prompt describing what it should do. See what comes back. Adjust. The fluency, confidence, and reliability come from doing, not planning.
If you're on LimaCharlie, our agentic AI repo is public and in plain markdown. Start there. If you hit a wall, the community is active and the team is reachable.
If you're ready to experiment with agentic SecOps, start at https://app.limacharlie.io/signup