LLM red teaming, GenAI governance, EU AI Act timelines, and the AppSec integration most programs are missing.

This week’s newsletter covers the AI security operational layer most programs haven’t built yet: how to test the LLMs already operating, how to govern GenAI use before the policy becomes shelfware, what the EU AI Act actually requires of your security team and when, and how to wire AI security into the AppSec and GRC workflows that already exist.

No new budget line required to start. Let’s get into it.

1. LLM Attack Surface Mapping: Where to Start

Before you can test your AI systems adversarially, you need to know what you have — and most organizations don’t.

The inventory gap is the first finding in nearly every AI security assessment we conduct. Procurement approved the tools. IT deployed them. Security wasn’t in the room for either conversation.

Start here:

The four questions that define your AI attack surface:

What data can each system access?
Map every data source connected to each AI deployment — SharePoint libraries, CRM records, HR systems, ticketing platforms, email. The context window is the attack surface. If the model can read it, an adversary can potentially extract it.

What can each system do autonomously?
Read-only systems and agentic systems are categorically different risk profiles. A copilot that summarizes documents is not the same as an agent that reads email, schedules meetings, and writes to file systems. Autonomy level determines blast radius.

Who can provide input to the system?
Internal users only? Authenticated customers? Unauthenticated public users? The broader the input population, the higher the probability of adversarial input. A customer-facing chatbot is a publicly accessible prompt injection surface.

What logging exists on model inputs and outputs?
This is the question most teams can’t answer. If model inputs and outputs aren’t logged, you have no forensic trail for an AI security incident. Detection and response require observability. Most enterprise AI deployments have none.

Run those four questions across every sanctioned AI system in your environment. The output is your attack surface map. It’s also the first deliverable for your AI red teaming program.

GenAI Acceptable Use Policy: Why Most Are Already Broken

Most enterprises now have a GenAI AUP. Most of them were drafted by legal, reviewed by HR, and communicated once via a company-wide email that 40% of employees opened.

That’s not a governance program. That’s a liability document.

The failure modes are predictable:

Too broad to enforce. “Do not enter confidential information into AI tools” is not an enforceable control. It requires employees to make a judgment call — every time, with no technical guardrail — about what qualifies as confidential. They will get it wrong. Consistently.

No data classification tie-in. An effective GenAI AUP maps permitted use cases to data classification levels. Public data: use approved tools freely. Internal data: use only approved enterprise tools with data residency controls. Regulated data: restricted use cases only, with documented approval. Without that mapping, the policy is a principle, not a control.

No approved tool catalog. A policy that prohibits unapproved AI tools without publishing a list of approved ones creates a shadow AI problem by design. Employees have legitimate productivity needs. If the approved path doesn’t exist, they’ll find an unapproved one.

No technical enforcement layer. Policy without enforcement is aspiration. Browser-based DLP rules, CASB integration for AI SaaS applications, and API gateway controls for AI tool access are the enforcement layer. Most GenAI AUPs have none of these implemented.

The template that works:
A three-tier use case model — open use, restricted use, prohibited use — mapped to your existing data classification framework, with an approved tool catalog, a fast-track review process for new tool requests, and a technical enforcement layer tied to your CASB or SSE platform.

AI Red Teaming: The Scoping Framework

This week we published the full AI red teaming framework as a long-form article. Here is the operational summary for security teams building the program.

The five attack vectors you’re testing for:

  • Prompt injection — malicious instructions in user input override system prompt behavior
  • Indirect injection via RAG — poisoned content in retrieved documents executes as model instruction
  • Data exfiltration via model outputs — sensitive context window data surfaces in responses, bypassing DLP
  • Jailbreaking — adversarial inputs bypass guardrails, exposing restricted capabilities or data
  • Model inversion — systematic output probing reconstructs training data including PII

The prioritization matrix:

Score each AI system: Data Access Level (1–5) × Autonomy Level (1–5) = Priority Score.

Scores of 20–25: test immediately. Any fully agentic system with access to regulated data lands here. Scores of 12–19: test within 90 days. Scores below 12: include in the next AppSec cycle.

The integration points most teams miss:

Your existing red team needs to include data science/analysis resources —  and AI red teaming requires new test cases inside the AppSec program that already exists. The first step is adding LLM-specific test case categories to your existing pentest scope — not standing up a parallel program.

EU AI Act: What Your Security Team Actually Needs to Do — and When

The EU AI Act is live. Most enterprise security teams have delegated it entirely to legal and compliance. That’s a mistake — because a significant portion of the Act’s requirements land squarely in security’s operational domain.

Here is the timeline that matters for security leaders:

Now — Already in Effect
Prohibited AI practices are banned: social scoring systems, real-time biometric surveillance in public spaces (with narrow exceptions), subliminal manipulation systems. If your organization operates in the EU or processes EU data, these prohibitions apply now. Security should verify no deployed systems fall into prohibited categories.

August 2026 — High-Risk AI System Requirements Take Effect
High-risk AI systems — defined as AI used in critical infrastructure, employment decisions, access to essential services, law enforcement, and several other categories — must meet requirements including:

  • Risk management systems documented and maintained
  • Data governance controls over training and operational data
  • Technical documentation sufficient for regulatory audit
  • Human oversight mechanisms that actually function
  • Accuracy, robustness, and cybersecurity standards

What security owns in this list: cybersecurity standards for high-risk systems, robustness testing (which maps directly to AI red teaming), access controls over training data, and audit logging of system inputs and outputs.

The practical starting point:
Inventory your AI deployments against the EU AI Act’s high-risk category definitions. Any system that qualifies needs a security requirements review against the Act’s cybersecurity provisions before August. If you’re starting that review now, you have time. If you’re waiting for Q3, you don’t.

Integrating AI Security into AppSec and GRC Workflows

The most common mistake in enterprise AI security programs is treating AI security as a standalone discipline. It isn’t. It’s an extension of the disciplines that already exist.

AppSec integration:
Add LLM-specific test cases to your existing web application testing program for any application with an AI component. Update your threat modeling templates to include “model as threat surface” as a standard architecture review element. Require AI red team results before production deployment for any high-autonomy system — the same gate you use for application security sign-off.

GRC integration:
Map AI security controls to your existing control framework — whether that’s NIST CSF, ISO 27001, SOC 2, or a custom framework. AI-specific controls are additive, not separate. Your risk register should include AI system risk entries scored the same way as any other technology risk. Your vendor risk assessment process should require AI security documentation from any vendor deploying AI in your environment.

The board reporting layer:
Quarterly AI risk posture updates should be a standing board agenda item by the end of 2026. The format: system inventory and risk scores, red team findings and remediation status, regulatory compliance posture (EU AI Act, emerging US state requirements), and the detection gap assessment — what AI security incidents would your current monitoring catch, and what would it miss.

If the board doesn’t have a standing AI risk update, that’s the gap to close first.

This Week’s Resources

Published this week:

→ [LinkedIn Post] Your red team is testing your network. Nobody is testing your LLMs.

→ [LinkedIn Article] AI Red Teaming: The CISO’s Framework for Adversarial Testing of Enterprise AI Systems — the scoping model, the five attack vectors, the board reporting format. Full long-form framework — built to share with your AppSec lead and AI engineering team

→ [LinkedIn Video] LLM Prompt Injection: Three enterprise scenarios, 90 seconds — share with teams who need the concept explained without the technical depth

The Question I’m Sitting With This Week

Every enterprise security leader we speak with is managing the same tension: AI adoption is moving faster than the governance program can keep up with. The business wants speed. Security wants control. Both are right.

The resolution isn’t slowing down AI adoption. It’s building the governance layer fast enough to stay close to it.

The organizations that are doing this well aren’t waiting for the perfect policy or the complete tool stack. They’re starting with inventory, adding test cases to existing programs, and building the board reporting layer before the first incident forces it.

The ones who are struggling are waiting for a centralized AI governance function that hasn’t been funded yet — while AI systems accumulate in production, ungoverned and untested.

The gap between those two groups will be measurable by the end of 2026. The question is which side of it your program is on.

We can help

If you want to find out more detail, we're happy to help. Just give us your business email so that we can start a conversation.

Thanks, we'll be in touch!

Subscribe

Join our mailing list to receive the latest announcements and offers.

You have Successfully Subscribed!

Stay in the know!

Keep informed of new speakers, topics, and activities as they are added. By registering now you are not making a firm commitment to attend.

Congrats! We'll be sending you updates on the progress of the conference.