March 2026 | Michael Rishi Forrester


I’ve spent the last 30 years watching enterprises adopt new technology. Mainframes to client-server. On-prem to cloud. Monoliths to microservices. Every wave has the same arc: early enthusiasm, chaotic adoption, scary incidents, and then, eventually, grown-up governance.

AI is somewhere between phases two and three right now. And after spending the last several months helping clients figure out how to actually integrate AI into their operations without burning the house down, I can tell you: the conversation most organizations are having about AI safety is the wrong conversation.

They’re asking “is AI safe?” when they should be asking “do we even know what our employees are doing with AI right now?”

The answer, almost universally, is no.

The Shadow AI Problem Is Worse Than You Think

Here’s a number that should keep every CISO up at night: the average enterprise currently has roughly 1,200 unofficial AI applications running inside it.1 Not sanctioned. Not monitored. Not governed. Just people doing their jobs with whatever tools they find useful.

Fifty-eight percent of employees use AI productivity tools daily.2 Only 18.5% are aware their company even has an AI policy.3 And only 28% of organizations have written a formal one.4

This isn’t a security problem. It’s an organizational awareness problem wearing a security costume.

When I sit down with a client’s leadership team, I don’t start with frameworks or compliance checklists. I start with a question: Do you know how many AI tools are running inside your company right now? The silence that follows tells me everything.

KPMG calls it shadow AI. Gartner predicts that through 2026, at least 80% of unauthorized AI transactions will come from internal policy violations, not external attackers.5 When shadow AI does lead to a breach, it costs an average of $670,000 more than a comparable incident.6 The threat isn’t some sophisticated nation-state attack. It’s Karen in accounting pasting customer financial data into ChatGPT because it helps her build pivot tables faster.

What the Frameworks Actually Say (And What They Don’t)

If you’re an architect or a leader trying to get your arms around AI governance, you’re drowning in frameworks right now. NIST AI RMF. ISO 42001. EU AI Act. OWASP Top 10 for LLMs. The Cloud Security Alliance’s AI Controls Matrix with its 243 control objectives across 18 domains.7 Singapore just published the world’s first governance framework specifically for agentic AI systems.8

That’s a lot of paper.

Here’s what actually matters if you’re building a governance program today:

NIST AI RMF is the operational backbone in the US. It’s what the Colorado AI Act references for safe harbor protection, and multinationals use it as the layer beneath regulatory compliance. NIST also dropped a preliminary draft of the Cyber AI Profile (IR 8596) in December 2025, which maps AI risks onto the Cybersecurity Framework 2.0 structure.9 If you’re only going to read one thing, read that.

ISO 42001 went from “nice to have” to table stakes in about six months. Microsoft’s supplier program now mandates it for AI systems handling sensitive data.10 Cornerstone OnDemand, Hudson Talent Solutions, Greenhouse, and Maven AGI all certified in Q1 2026.11 Seventy-six percent of companies plan to pursue an AI audit or certification within the next two years.12 If you sell to enterprises, this is coming for you whether you like it or not.

The EU AI Act is the one with teeth. Prohibited practices became enforceable in February 2025. GPAI model obligations kicked in August 2025. The big date is August 2, 2026, when high-risk AI system requirements become fully enforceable, with fines up to €35 million or 7% of global turnover.13 Five months from now. If you have European customers or European employees and you’re not already in motion, you’re late.

OWASP Top 10 for LLMs 2025 added two new categories worth knowing: System Prompt Leakage and Vector/Embedding Weaknesses (the latter targeting RAG systems).14 Prompt injection is still number one. It will probably always be number one.

What none of these frameworks will tell you is how to get 50,000 employees to stop pasting proprietary data into consumer AI tools by next Tuesday. That’s on you.

The Architecture That’s Actually Emerging

After talking to dozens of teams building AI integrations, I’ve watched a pattern solidify. It’s not revolutionary. It’s just what works.

The AI gateway is the centerpiece. Think of it as a proxy layer between your applications and every AI model provider you use. All LLM traffic routes through it. Authentication, content policies, PII redaction, cost controls, rate limiting, audit logging, all in one place. TrueFoundry, Portkey, and Kong all offer commercial versions. LiteLLM is a solid open-source option. Microsoft pushes Azure API Management for this role.
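The gateway pattern is simple enough to sketch in a few lines. What follows is a toy, in-process illustration of what the choke point enforces, not any vendor’s API; the class name, method names, and limits are all invented for the example:

```python
import time
from collections import defaultdict, deque

class AIGateway:
    """Toy sketch of the AI gateway pattern: one choke point that
    authenticates callers, rate-limits them, and audit-logs every
    request before it would be forwarded to a model provider."""

    def __init__(self, api_keys, max_per_minute=60):
        self.api_keys = set(api_keys)
        self.max_per_minute = max_per_minute
        self.windows = defaultdict(deque)  # caller -> recent call timestamps
        self.audit_log = []

    def handle(self, api_key, model, prompt):
        # Authentication: only registered callers get through.
        if api_key not in self.api_keys:
            raise PermissionError("unknown caller")
        # Rate limiting: sliding one-minute window per caller.
        now = time.time()
        window = self.windows[api_key]
        while window and now - window[0] > 60:
            window.popleft()
        if len(window) >= self.max_per_minute:
            raise RuntimeError("rate limit exceeded")
        window.append(now)
        # Audit logging: who sent what, to which model, when.
        self.audit_log.append({"caller": api_key, "model": model,
                               "prompt_chars": len(prompt), "ts": now})
        # A real gateway (LiteLLM, Portkey, Kong, Azure APIM) would forward
        # upstream here; the sketch just returns what would be sent.
        return {"model": model, "prompt": prompt}
```

Content policies and PII redaction slot into the same `handle` path, which is the whole point: one place to add a control, one place to audit it.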

Behind the gateway, you layer guardrails. AWS Bedrock Guardrails can filter across six harm categories and recently added automated reasoning checks that use formal mathematical logic, claiming 99% factual accuracy verification.15 NVIDIA’s NeMo Guardrails provides open-source microservices for content safety and jailbreak detection trained on 17,000 known attacks.16 Guardrails AI offers community-contributed validators with 5,900 GitHub stars.

For prompt injection specifically, and I need to be honest here, there is no complete fix. OpenAI acknowledged this when they launched Lockdown Mode for ChatGPT in February 2026. Attack success rates hit 50-84% depending on configuration.17 Critical CVEs hit Microsoft Copilot (CVSS 9.3), GitHub Copilot (CVSS 9.6), and Cursor IDE (CVSS 9.8) in 2025.18 The best you can do is defense-in-depth: input validation, output scanning, privilege minimization for agent tool access, and behavioral monitoring. Infrastructure-level enforcement, not tool-level safety features. (This is the thesis of my Eight Guardrails framework: guardrails enforced by the AI tool itself are bypassable; guardrails enforced by infrastructure are not.)
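Those defense-in-depth layers are easy to sketch, and worth sketching precisely because no single one is sufficient. The patterns, tool names, and secret formats below are illustrative heuristics I’ve made up for the example, not a catalog of real attack signatures, and, per the point above, they reduce risk rather than eliminate it:

```python
import re

# Layer 1 -- input validation: flag common injection phrasings.
INJECTION_PATTERNS = [
    re.compile(p, re.IGNORECASE) for p in (
        r"ignore (all |any )?(previous|prior) instructions",
        r"you are now",
        r"system prompt",
        r"disregard .* (rules|instructions)",
    )
]

# Layer 2 -- privilege minimization: agents only get allowlisted tools.
ALLOWED_TOOLS = {"search_docs", "summarize"}

# Layer 3 -- output scanning: block responses that look like credential leaks.
SECRET_PATTERN = re.compile(r"(api[_-]?key|password)\s*[:=]\s*\S+", re.IGNORECASE)

def screen_input(prompt):
    """Return the list of injection heuristics matched (empty = pass)."""
    return [p.pattern for p in INJECTION_PATTERNS if p.search(prompt)]

def authorize_tool(tool_name):
    """Deny-by-default tool access for agents."""
    return tool_name in ALLOWED_TOOLS

def scan_output(text):
    """True if the model output is safe to release."""
    return SECRET_PATTERN.search(text) is None
```

Each layer is trivially bypassable on its own; the bet is that stacking them, plus behavioral monitoring you can’t express in a regex, raises the attacker’s cost.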

For RAG systems, you need governance across three layers: what goes into your knowledge base, what gets retrieved and injected into prompts, and what comes out the other side. Document-level access control enforcement during retrieval is critical. If your RAG system doesn’t respect the same permissions as your document management system, you’ve just built a compliance bypass tool and called it a productivity feature.
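Here is a minimal sketch of permission-aware retrieval, with a toy in-memory knowledge base standing in for a vector store. Document IDs, groups, and the keyword relevance check are all invented for illustration; the one idea that matters is the ACL intersection in the filter:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    doc_id: str
    text: str
    allowed_groups: set  # mirrors the ACL on the source document system

# Toy knowledge base standing in for a vector store.
KB = [
    Doc("handbook", "PTO policy: employees accrue 20 days per year.", {"all-staff"}),
    Doc("comp-bands", "Confidential: L5 salary band details.", {"hr", "execs"}),
]

def retrieve(query, user_groups, kb=KB):
    """Permission-aware retrieval: a document is eligible for injection
    into the prompt only if the querying user's groups intersect its ACL.
    Relevance is a crude keyword check standing in for vector similarity."""
    words = query.lower().split()
    return [d for d in kb
            if d.allowed_groups & user_groups          # enforce the ACL first
            and any(w in d.text.lower() for w in words)]
```

If the ACL check is missing, an engineer asking about “salary band” gets HR-only content summarized back to them, which is exactly the compliance bypass described above.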

The AI Assistant Wars, From a Security Perspective

Every enterprise I work with is running at least two, usually three AI platforms simultaneously. Here’s where things stand:

Microsoft 365 Copilot has a deployment problem that nobody at Microsoft wants to talk about publicly. Only 6% of organizations moved from pilot to production according to Gartner.19 Sixty percent are stuck in pilot purgatory. The core issue isn’t the technology. It’s that Copilot accesses anything available to the user across SharePoint, Teams, OneDrive, and Exchange. Suddenly, years of sloppy permission management become a data exposure issue. AI-related data security incidents jumped from 27% to 40% between 2023 and 2024.20

Claude is quietly winning the enterprise market. Thirty-two percent of enterprise LLM workloads now run on Claude models, overtaking OpenAI’s 25% (which dropped from 50% two years ago).21 Claude is particularly strong in healthcare, finance, and legal, sectors where Anthropic’s safety-first positioning and contractual data protections matter. As of January 2026, Anthropic is even integrated as a subprocessor in Microsoft 365 Copilot.22

ChatGPT Enterprise still has the largest user base with over 5 million paying business clients.23 It’s the consumer default that enterprises grudgingly formalize.

The multi-model reality means your governance approach can’t be vendor-specific. You need policies and controls that work across all of them, which brings us back to the gateway architecture.
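One way to make a vendor-agnostic policy concrete is to express it as data the gateway can evaluate, rather than as prose in a PDF. The classification names and tool identifiers below are hypothetical placeholders for whatever your data classification scheme and approved tool list actually contain:

```python
# Acceptable-use policy as data: which data classifications may be sent
# to which AI tools. Classifications and tool names are illustrative.
POLICY = {
    "public":       {"chatgpt-enterprise", "claude-for-work", "consumer-llm"},
    "internal":     {"chatgpt-enterprise", "claude-for-work"},
    "confidential": {"chatgpt-enterprise"},
    "restricted":   set(),  # never leaves the boundary, on any tool
}

def is_permitted(classification, tool):
    """Deny by default: unknown classifications and unlisted tools fail."""
    return tool in POLICY.get(classification, set())
```

Because the table is keyed on classification rather than vendor, adding or swapping a model provider is a one-line change instead of a policy rewrite.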

The US Regulatory Situation Is a Mess

I’m not going to sugarcoat this: if you’re trying to plan a compliance strategy around US federal AI regulation, good luck.

The Trump administration revoked Biden’s EO 14110 on day one and replaced it with EO 14179, “Removing Barriers to American Leadership in Artificial Intelligence.”24 The OMB rescinded Biden’s risk management memo and replaced it with M-25-21, which dropped the categories of “rights-impacting” and “safety-impacting” AI in favor of “high-impact” AI.25 The new memo notably does not reference the NIST AI RMF at all.

Then came the December 2025 executive order on federal AI preemption, which directed the Attorney General to create an AI Litigation Task Force specifically to challenge state AI laws.26 It conditions broadband funding eligibility on states not having “onerous AI laws.” Whether an executive order can actually preempt state legislation without Congress is a live legal question that’s almost certainly heading to court.

Meanwhile, the Colorado AI Act (the first broad state AI law) got delayed from February to June 30, 2026.27 Over 1,000 AI-related bills were introduced across states in 2025.28 The FY2026 NDAA directs DoD to build an AI security framework for defense contractors, essentially a “CMMC for AI.”29 The NDAA also banned DeepSeek from all DoD contracts and systems.

What this means practically: plan for the EU AI Act as your compliance floor. It’s the only regulation with clear deadlines, real enforcement mechanisms, and extraterritorial reach. US federal regulation is moving toward deregulation. State regulation is under legal threat. The EU is the only jurisdiction where you can actually build a compliance roadmap with confidence.

For government work specifically: Microsoft achieved FedRAMP High for its full AI suite in December 2025.30 AWS Bedrock holds FedRAMP High and DoD IL5. Anthropic’s Claude reached FedRAMP High through both AWS and Google Cloud. OpenAI got prioritized FedRAMP 20x authorization.31 If you’re selling to federal agencies, FedRAMP is the gatekeeper.

Privacy: Two Tiers, One Problem

Here’s a development from late 2025 that didn’t get enough attention: OpenAI, Anthropic, and Google all quietly shifted to opt-out training data models for their consumer plans.32 If your employees are using free or personal-tier AI accounts for work, their inputs may be training the next model generation.

Enterprise tiers maintain no-training guarantees. ChatGPT Enterprise, Claude for Work, Azure OpenAI, and AWS Bedrock all contractually commit to not using customer data for training. But the gap between “enterprise plan” and “person’s personal account they use at work” is where data leaks.

This is why the AI gateway matters for data protection too. Proxying all outbound AI traffic through a gateway that can intercept and redact PII before it leaves your network boundary is becoming standard practice. Tools like Nightfall AI, Microsoft Purview DSPM for AI, and Lakera Guard handle real-time scanning and redaction. AWS Bedrock Guardrails can automatically mask 16+ types of PII.33
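Here is a toy version of that interception step, covering just three of the many PII types a commercial DLP product detects. The regexes are deliberately simplistic; real detectors (Purview, Nightfall, Bedrock Guardrails) handle formats and contexts these will miss:

```python
import re

# Illustrative subset of PII classes a DLP layer would mask before a
# prompt crosses the network boundary.
PII_RULES = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
}

def redact(text):
    """Replace each match with a typed placeholder; report what was found
    so the gateway can also audit-log the redaction event."""
    found = []
    for label, pattern in PII_RULES.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label}]", text)
    return text, found
```

The key design point is that this runs at the network boundary, inside the gateway, so it applies regardless of which AI tool the employee happens to be using.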

Despite all this tooling, 78% of organizations still can’t validate what data is entering AI training pipelines.34 Fifteen percent of employees have pasted sensitive data into public LLMs.35 The technology exists to prevent this. The organizational discipline to deploy it universally does not.

For regulated industries, the stakes are even higher. The proposed HIPAA Security Rule update, the first major revision in 20 years, would explicitly protect ePHI used in AI training data.36 The average healthcare data breach costs $10.9 million.37 On the GDPR front, OpenAI was fined €15 million by Italy’s DPA for training on personal data without adequate legal basis.38

The $2 Billion That Changed the Vendor Market Overnight

September 2025 was the month every major cybersecurity platform swallowed an AI security startup whole:

  • Cisco acquired Robust Intelligence (~$400M), creating Cisco AI Defense39
  • Palo Alto Networks acquired Protect AI (~$500M+), building Prisma AIRS40
  • Check Point acquired Lakera (~$300M)41
  • F5 acquired CalypsoAI ($180M), launching F5 AI Guardrails42
  • CrowdStrike acquired Pangea (~$260M)43
  • Cato Networks acquired Aim Security43

Two billion dollars in AI security acquisitions in one quarter. The standalone AI security startup category essentially got absorbed into the existing cybersecurity platform vendors. If you already have a relationship with Cisco, Palo Alto, or CrowdStrike, your AI security story now lives inside their platform. If you need independent tooling, HiddenLayer (model security, partnered with Microsoft), Arthur AI (agentic discovery and governance), Guardrails AI (open-source output validation), and Patronus AI (hallucination detection) are the remaining independents worth evaluating.

What CISOs Are Actually Worried About

I talk to security leaders regularly. Here’s the hierarchy of concerns that shows up in every conversation, backed by data from Splunk, Saviynt, Team8, and Proofpoint surveys covering thousands of CISOs:44

Data leakage via AI systems. 76%+ are worried. This is the number one concern, full stop.

Shadow AI. 90% are concerned about privacy and security implications of unsanctioned AI use.

Hallucination causing business harm. 83% worry about this, and with good reason. In the legal profession alone, hallucination incidents went from “two per week” to “two to three cases per day” by spring 2025. Over 1,005 court cases involving AI-hallucinated content have been tracked globally. Sixty-six lawyers were sanctioned for submitting fabricated citations in a single year.45

Personal liability. 76% of CISOs now worry about being personally on the hook for AI-related security incidents.44 This is new and significant.

The incidents justifying these concerns are not theoretical. In November 2025, Anthropic disclosed what appears to be the first documented large-scale cyberattack executed primarily by AI. Chinese state-sponsored actors used jailbroken Claude Code to hit approximately 30 organizations.46 GitHub Copilot had a CVSS 9.6 prompt injection vulnerability where code comments could trigger remote code execution on over 100,000 developer machines.18 Microsoft 365 Copilot experienced a zero-click data exfiltration attack via crafted emails.20

These are not edge cases. These are mainstream tools being exploited in production.

The Insurance Gap Nobody Wants to Discuss

Cyber insurance is roughly a $16 billion market projected to hit $40 billion by 2030.47 AI risks currently sit in what the industry calls “silent coverage,” implicitly included under existing policies without explicit terms, similar to how cyber risk was treated fifteen years ago.

Coalition launched deepfake-specific coverage in December 2025. Embroker has drafted explicit AI endorsements for professional liability.48 But a growing number of insurers are adding broad AI-specific exclusions to E&O, D&O, and cyber policies. If your organization is deploying AI at scale and you haven’t reviewed your insurance coverage specifically for AI-related liabilities, you have a blind spot.

On the IP front, over 151 notable copyright suits are pending against AI platforms in the US.49 A February 2026 federal court ruling held that user communications with Claude are not protected by attorney-client privilege, meaning AI inputs may be discoverable in litigation.

So What Should You Actually Do?

I’ve distilled this down to the minimum viable action list for an enterprise that’s serious about AI adoption but hasn’t yet built governance infrastructure. In priority order:

1. Find out what’s already running. Before you build anything, do a shadow AI discovery sweep. You cannot govern what you cannot see. Network monitoring tools, CASB logs, procurement records, whatever it takes to build an inventory of every AI tool touching your environment.

2. Deploy an AI gateway. Route all sanctioned AI traffic through a centralized proxy. This gives you authentication, audit logging, cost visibility, and a policy enforcement point. Do this before you expand AI usage, not after.

3. Write an acceptable use policy and make sure people actually know about it. Only 18.5% of employees know their company’s AI policy exists.3 The policy itself doesn’t need to be 50 pages. It needs to clearly state what tools are approved, what data classifications can be sent to AI services, and what human review is required before AI outputs are used in decisions or client-facing work.

4. Negotiate your vendor contracts properly. “No Training” clauses that prevent use of your data for model improvement. Zero Data Retention options where available. Data residency commitments. Model portability and exit strategy documentation. Ninety-two percent of AI vendors claim broad data usage rights.50 Read the fine print.

5. Start ISO 42001 preparation alongside your existing ISO 27001 program. The market is moving toward requiring it. Getting ahead of the wave is cheaper and less painful than scrambling when a major customer or regulator demands it.

6. Plan for the EU AI Act deadline of August 2, 2026. If you have any European exposure (customers, employees, data subjects) you need a compliance plan. Five months is tight for organizations starting from zero.

7. Build agentic AI governance now. Autonomous AI agents that can take actions, call tools, and execute multi-step workflows are the next wave. The security model for agents is a different animal than chat-based AI. If you’re deploying agents or planning to, establish governance guardrails before they go to production, not after something breaks.
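Step 1, the discovery sweep, often starts as nothing fancier than grepping egress logs against a list of known AI endpoints. The domains, log format, and sanctioned list below are stand-ins; a real sweep would use a maintained CASB category feed and your proxy’s actual log schema:

```python
from collections import Counter

# Hypothetical known AI-service domains. A real inventory would come from
# a maintained CASB/DLP category feed, not a hard-coded list.
AI_DOMAINS = {"chat.openai.com", "api.openai.com", "claude.ai",
              "api.anthropic.com", "gemini.google.com"}

def discover_shadow_ai(proxy_log_lines, sanctioned=frozenset({"api.openai.com"})):
    """First pass of a shadow-AI sweep: count hits to known AI domains in
    egress proxy logs and flag traffic to unsanctioned services. Expects
    'user domain' pairs per line, a stand-in for real log formats."""
    unsanctioned = Counter()
    users = set()
    for line in proxy_log_lines:
        user, domain = line.split()
        if domain in AI_DOMAINS and domain not in sanctioned:
            unsanctioned[domain] += 1
            users.add(user)
    return unsanctioned, users
```

The output of a sweep like this, which services, how often, and by whom, is what makes the rest of the list actionable: you size the gateway for real traffic and write the policy around tools people demonstrably use.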

The Honest Assessment

Here’s the part where I level with you. Less than 1% of enterprises have what anyone would call a mature AI governance program.51 Only 29% feel prepared to defend against AI-related threats.52 Only 37% conduct regular AI risk assessments.52 And the capability gap between AI adoption speed and governance readiness is widening, not closing.

The organizations getting this right are spending approximately 0.5-1% of their AI technology budget on governance infrastructure.53 They’re treating governance as a delivery accelerator, the thing that lets them move faster because they have guardrails, not as a brake on innovation. The data suggests they’re seeing 300-2,000% ROI on their AI investments while avoiding the incidents that cost unprepared organizations $4.8 million per breach on average.54

The window for getting governance right before something forces your hand is closing. The EU AI Act enforcement deadline is five months out. State AI laws are proliferating despite federal pushback. The insurance market is hardening. And the AI agents being deployed today are more autonomous, more capable, and more dangerous when misconfigured than anything we’ve dealt with before.

Technology changes. Human challenges don’t. The challenge right now isn’t whether AI works. It does. The challenge is whether your organization can adopt it without creating risks you don’t understand, can’t see, and aren’t insured for.

I’ve been through enough platform shifts to know how this plays out. The organizations that invest in governance early come out ahead. The ones that wait for an incident to force their hand pay a lot more, in money, in reputation, and in trust.

Build the guardrails now. You’ll thank yourself later.


Michael Rishi Forrester is a Principal Training Architect with 30 years of experience helping organizations adopt technology safely. He has trained over 1 million engineers across platforms including KodeKloud, Coursera, O’Reilly, and YouTube. His Eight Guardrails framework for AI agent safety in Kubernetes environments was published in February 2026.


Statistics Appendix

1 Knostic, “Detect and Control: Shadow AI in the Enterprise” (2025). Estimate based on enterprise shadow AI discovery audits. www.knostic.ai/blog/shad…

2 Invicti, “Shadow AI: Risks, Challenges, and Solutions in 2025.” Employee AI usage rates from enterprise surveys. www.invicti.com/blog/web-…

3 Invicti, ibid. Only 18.5% of employees surveyed were aware of their employer’s AI acceptable use policy.

4 ISACA, “Artificial Intelligence Acceptable Use Policy Template” (2025). Based on ISACA member survey data on formal AI policy adoption. www.isaca.org/resources…

5 Gartner, prediction cited in KPMG, “Shadow AI Is Already Here: Take Control, Reduce Risk, and Unleash Innovation” (2025). kpmg.com/kpmg-us/c…

6 KPMG, ibid. Incremental cost of shadow AI-related breaches versus baseline security incidents.

7 Cloud Security Alliance, AI Controls Matrix (July 2025). 243 control objectives across 18 security domains for AI systems. cloudsecurityalliance.org

8 Computer Weekly, “Singapore debuts world’s first governance framework for agentic AI” (January 2026). Published by Singapore’s IMDA and AI Verify Foundation. www.computerweekly.com/news/3666…

9 NIST, Internal Report IR 8596 (Initial Public Draft, December 2025). “Cybersecurity Framework Profile for AI Systems.” nvlpubs.nist.gov/nistpubs/… NCCoE project page: www.nccoe.nist.gov/projects/…

10 A-LIGN, “ISO 42001 Certification” (2025). Microsoft SSPA v10 program requirements for AI suppliers. www.a-lign.com/service/i…

11 Individual press releases: Cornerstone OnDemand (https://www.cornerstoneondemand.com), Hudson Talent Solutions (GlobeNewswire, February 24, 2026), Greenhouse (PR Newswire, March 2026), Maven AGI (PR Newswire, February 2026).

12 A-LIGN, 2025 Benchmark Report. 76% of surveyed companies plan AI audit or certification within 24 months. www.a-lign.com/service/i…

13 European Commission, “AI Act: Shaping Europe’s Digital Future.” Enforcement timeline and penalty structure. digital-strategy.ec.europa.eu/en/polici… Implementation timeline: ai-act-service-desk.ec.europa.eu/en/ai-act…

14 Confident AI, “OWASP Top 10 2025 for LLM Applications: What’s New?” (2025). LLM07 (System Prompt Leakage) and LLM08 (Vector and Embedding Weaknesses) added. www.confident-ai.com/blog/owas…

15 AWS, Amazon Bedrock Guardrails product page. Automated reasoning checks for factual accuracy. aws.amazon.com/bedrock/g…

16 VentureBeat, “Nvidia tackles agentic AI safety and security with new NeMo Guardrails NIMs” (2025). Training dataset of 17,000+ known jailbreak attacks. venturebeat.com/ai/nvidia…

17 Vectra AI, “Prompt injection: types, real-world CVEs, and enterprise defenses.” Attack success rates of 50-84% across configurations. www.vectra.ai/topics/pr… Obsidian Security, “Prompt Injection Attacks: The Most Common AI Exploit in 2025.” www.obsidiansecurity.com/blog/prom…

18 Vectra AI, ibid. CVE details for Microsoft Copilot (CVSS 9.3), GitHub Copilot (CVSS 9.6), Cursor IDE (CVSS 9.8).

19 Gartner, cited in Adoptify AI, “2026 Microsoft Copilot Governance Framework: Executive Guide.” 6% pilot-to-production conversion rate, 60% stuck in pilots. www.adoptify.ai/blogs/202…

20 Knostic, “Microsoft Copilot data security and governance: A practical guide for CISOs.” AI data security incidents rising from 27% to 40%. www.knostic.ai/blog/micr…

21 Data Studios, “Claude is preferred by enterprises, ChatGPT by employees: how generative AI choices are changing within companies in 2025.” Claude at 32% of enterprise workloads, OpenAI at 25% (down from 50%). www.datastudios.org/post/clau…

22 Microsoft Learn, “Anthropic as a subprocessor for Microsoft Online Services” (January 2026). learn.microsoft.com/en-us/cop…

23 Data Studios, ibid. ChatGPT Enterprise at 5M+ paying business clients.

24 Squire Patton Boggs, “Key Insights on President Trump’s New AI Executive Order and Policy Regulatory Implications.” EO 14179 replacing EO 14110. www.squirepattonboggs.com/insights/…

25 Hunton Andrews Kurth, “OMB Issues Revised Policies on AI Use and Procurement by Federal Agencies.” M-25-21 replacing M-24-10. www.hunton.com/privacy-a… Wiley, “Trump Administration Revamps Guidance on Federal Use and Procurement of AI.” www.wiley.law/alert-Tru…

26 Sidley Austin, “Unpacking the December 11, 2025 Executive Order: Ensuring a National Policy Framework for Artificial Intelligence.” AI Litigation Task Force and state preemption provisions. www.sidley.com/en/insigh…

27 Hudson Cook, “Colorado Special Session Update: AI Law Delayed to June 2026.” www.hudsoncook.com/article/c… Epstein Becker Green, “Colorado’s Historic AI Law Survives Without Delay (So Far).” www.workforcebulletin.com/colorados…

28 Credo AI, “Latest AI Regulations Update: What Enterprises Need to Know in 2026.” 1,000+ state-level AI bills in 2025. www.credo.ai/blog/late…

29 Crowell & Moring, “CMMC for AI? Defense Policy Law Imposes AI Security Framework and Requirements on Contractors.” FY2026 NDAA provisions. www.crowell.com/en/insigh…

30 FinancialContent, “Microsoft Confirms All AI Services Meet FedRAMP High Security Standards” (December 30, 2025). markets.financialcontent.com/wral/arti…

31 GSA, “GSA and FedRAMP Announce Major Initiative: Prioritizing 20x Authorizations for AI Cloud Solutions” (August 25, 2025). www.gsa.gov/about-us/… FedScoop, “ChatGPT gets one step closer to widespread government use.” fedscoop.com/chatgpt-g…

32 TV News Check, “OpenAI, Google & Anthropic All Just Quietly Backtracked User Privacy Settings.” tvnewscheck.com/business/… Shelly Palmer, “Anthropic’s Privacy Pivot: Users Must Opt-Out by September 28” (August 2025). shellypalmer.com/2025/08/a…

33 AWS, Amazon Bedrock Guardrails. PII masking for 16+ data types. aws.amazon.com/bedrock/g…

34 Protecto AI, “AI Data Privacy Statistics & Trends 2025.” 78% of organizations unable to validate data in AI training pipelines. www.protecto.ai/blog/ai-d…

35 Protecto AI, ibid.; Lakera, “Data Loss Prevention (DLP): A Complete Guide for the GenAI Era.” 15% of employees have pasted sensitive data into public LLMs. www.lakera.ai/blog/data…

36 HIPAA Journal, “When AI Technology and HIPAA Collide.” Proposed Security Rule update covering ePHI in AI training data. www.hipaajournal.com/when-ai-t…

37 HIPAA Journal, ibid. $10.9M average cost of healthcare data breach (IBM/Ponemon 2025 data).

38 SecurePrivacy, “EU AI Act 2026 Compliance Guide.” Italian DPA Garante €15M fine against OpenAI. secureprivacy.ai/blog/eu-a…

39 Calcalist Tech, “Inside Yaron Singer’s surprising $400M sale to Cisco” (2025). www.calcalistech.com/ctechnews… Cisco, “Robust Intelligence Is Now Part of Cisco.” www.cisco.com/site/us/e…

40 Palo Alto Networks, “Palo Alto Networks Completes Acquisition of Protect AI” (2025). www.paloaltonetworks.com/company/p… GeekWire, “Palo Alto Networks to acquire Seattle cybersecurity startup Protect AI.” www.geekwire.com/2025/palo…

41 ChannelE2E, “Check Point Acquires Lakera to Build Full AI Security Stack.” www.channele2e.com/news/chec…

42 F5, “F5 to acquire CalypsoAI to bring advanced AI guardrails to large enterprises.” www.f5.com/company/n…

43 SecurityWeek, “Cybersecurity M&A Roundup: 40 Deals Announced in September 2025.” CrowdStrike/Pangea and Cato/Aim Security. www.securityweek.com/cybersecu… Infosecurity Magazine, “Cybersecurity M&A Roundup: CrowdStrike, SentinelOne and Check Point In.” www.infosecurity-magazine.com/news-feat…

44 CISO survey data aggregated from: Splunk/Cisco 2025 CISO Report (650 CISOs), Saviynt State of Identity Security Survey (235 CISOs), Team8 2025 CISO Village Survey (110+ CISOs), Proofpoint 2025 Voice of the CISO Report (1,600 CISOs globally). Percentages represent cross-survey averages where data overlaps.

45 Cronkite News / Arizona PBS, “As more lawyers fall for AI hallucinations, ChatGPT says: Check my work” (October 28, 2025). 1,005+ tracked cases, 66 sanctions, incident frequency increase. cronkitenews.azpbs.org/2025/10/2…

46 Anthropic threat intelligence disclosure, GTG-1002 (November 2025). Chinese state-sponsored actors using jailbroken Claude Code against approximately 30 organizations.

47 WTW, “Cyber risk: A look ahead to 2026” (February 2026). $16B current market, $40B projection by 2030. www.wtwco.com/en-us/ins… WTW, “Insuring the AI Age” (December 2025). www.wtwco.com/en-us/ins…

48 Insurance Business America, “Cyber insurance enters the AI risk era as limits, wording and underwriting models shift.” Coalition deepfake coverage and Embroker AI endorsements. www.insurancebusinessmag.com/us/news/c…

49 Ropes & Gray, “An End-of-Year Update to the Current State of AI Related Copyright Litigation” (December 2024, updated 2025). 151+ notable pending suits. www.ropesgray.com/en/insigh…

50 Data & Trusted AI Alliance, AI Vendor Assessment Framework (October 2025). 92% of AI vendors claim broad data usage rights versus 63% market average. Referenced in vendor evaluation research from Netguru and Pertama Partners.

51 Liminal, “Enterprise AI Governance: Complete Implementation Guide (2025).” Less than 1% of enterprises with mature AI governance programs despite 78% using AI. www.liminal.ai/blog/ente…

52 Akto, “State of Agentic AI Security 2025: Adoption, Risks & CISO Insights.” 29% prepared to defend against AI threats, 37% conduct regular AI risk assessments. www.akto.io/blog/stat…

53 SecurePrivacy, “AI Governance: Enterprise Compliance & Risk Management Guide (2026).” Governance spend benchmarks of 0.5-1% of AI technology investment. secureprivacy.ai/blog/ai-g…

54 IBM, Cost of a Data Breach Report 2025. $4.8M global average cost per data breach. Widely cited across enterprise security literature.