
Sr. Technical Content Strategist

AI tools are moving faster than the security controls meant to govern them.In this episode of Defender Fridays, Cisco's Cybersecurity Technical Solutions Architect Katherine McNamara walks through changes in the threat landscape as organizations rush to integrate AI without applying basic security discipline.
When Katherine meets with customers to discuss AI security, the conversation almost always starts and ends in the same place: data leakage. Someone might upload sensitive files to a public LLM. That's a real but narrow risk.
The deeper exposure comes from what organizations build on top of AI, specifically integrations with backend systems that inherit whatever access the model is granted. A customer service chatbot connected to a production database with read/write access is an attack surface. Without scoped permissions and deterministic access controls, that access is effectively available to anyone who can craft the right prompt.
One of the sharpest observations in the conversation: prompt injection works because LLMs can't distinguish between system instructions and user input at a fundamental level. Every new session starts without memory of prior interactions, which means an attacker gets unlimited attempts to find the right framing.
Filters don't offer the categorical certainty of a firewall rule. If a rule blocks destination port 80, every port 80 packet is dropped, no exceptions. A system prompt telling an LLM to ignore certain instructions is more like telling a human not to fall for social engineering. It works until it doesn't.
Hard gates on what the model can access in the first place matter more than hoping the model holds the line.
A written policy against unauthorized AI tool use doesn't stop anyone. Katherine's point is direct: tech-savvy users, including security professionals, are downloading agentic tools, granting them root access or OAuth permissions to work systems, and integrating them without sandboxing or scoping.
She references the Vercel breach as a recent example of how liberal permission grants to AI tooling became a vector. MDM-style governance for AI tools, controlling what can be installed, what it can connect to, and with what permissions, is where the industry needs to go.
The risks Katherine describes are exactly the kind of environment LimaCharlie's Agentic SecOps Workspace is built for. Every AI action in the ASW is observable, auditable, and scoped to the permissions you define.
AI agents operate through the same APIs as human analysts, with no hidden execution layer and no privileged black box. For MSSPs managing security across multiple client environments, that level of control is what makes agentic AI viable rather than risky.
440 N Barranca Ave #5258
Covina, CA 91723
5307 Victoria Drive #566
Vancouver, BC V5P 3V6
Stay up-to-date on all things LimaCharlie with our monthly newsletter.
This site is protected by reCAPTCHA and the Google Privacy Policy and Terms of Service apply.