I Just Helped an Enterprise Adopt AI. Here's What Nobody's Talking About.
March 2026 | Michael Rishi Forrester
I’ve spent the last 30 years watching enterprises adopt new technology. Mainframes to client-server. On-prem to cloud. Monoliths to microservices. Every wave has the same arc: early enthusiasm, chaotic adoption, scary incidents, and then, eventually, grown-up governance.
AI is somewhere between phases two and three right now. And after spending the last several months helping clients figure out how to actually integrate AI into their operations without burning the house down, I can tell you: the conversation most organizations are having about AI safety is the wrong conversation.
They’re asking “is AI safe?” when they should be asking “do we even know what our employees are doing with AI right now?”
The answer, almost universally, is no.
The Shadow AI Problem Is Worse Than You Think
Here’s a number that should keep every CISO up at night: the average enterprise currently has roughly 1,200 unofficial AI applications running inside it.1 Not sanctioned. Not monitored. Not governed. Just people doing their jobs with whatever tools they find useful.
Fifty-eight percent of employees use AI productivity tools daily.2 Only 18.5% are aware their company even has an AI policy.3 And only 28% of organizations have written a formal one.4
This isn’t a security problem. It’s an organizational awareness problem wearing a security costume.
When I sit down with a client’s leadership team, I don’t start with frameworks or compliance checklists. I start with a question: Do you know how many AI tools are running inside your company right now? The silence that follows tells me everything.
KPMG calls it shadow AI. Gartner predicts that through 2026, at least 80% of unauthorized AI transactions will come from internal policy violations, not external attackers.5 When shadow AI does lead to a breach, it costs an average of $670,000 more than a comparable incident.6 The threat isn’t some sophisticated nation-state attack. It’s Karen in accounting pasting customer financial data into ChatGPT because it helps her build pivot tables faster.
What the Frameworks Actually Say (And What They Don’t)
If you’re an architect or a leader trying to get your arms around AI governance, you’re drowning in frameworks right now. NIST AI RMF. ISO 42001. EU AI Act. OWASP Top 10 for LLMs. The Cloud Security Alliance’s AI Controls Matrix with its 243 control objectives across 18 domains.7 Singapore just published the world’s first governance framework specifically for agentic AI systems.8
That’s a lot of paper.
Here’s what actually matters if you’re building a governance program today:
NIST AI RMF is the operational backbone in the US. It’s what the Colorado AI Act references for safe harbor protection, and multinationals use it as the layer beneath regulatory compliance. NIST also dropped a preliminary draft of the Cyber AI Profile (IR 8596) in December 2025, which maps AI risks onto the Cybersecurity Framework 2.0 structure.9 If you’re only going to read one thing, read that.
ISO 42001 went from “nice to have” to table stakes in about six months. Microsoft’s supplier program now mandates it for AI systems handling sensitive data.10 Cornerstone OnDemand, Hudson Talent Solutions, Greenhouse, and Maven AGI all certified in Q1 2026.11 Seventy-six percent of companies plan to pursue an AI audit or certification within the next two years.12 If you sell to enterprises, this is coming for you whether you like it or not.
The EU AI Act is the one with teeth. Prohibited practices became enforceable in February 2025. GPAI model obligations kicked in August 2025. The big date is August 2, 2026, when high-risk AI system requirements become fully enforceable, with fines up to €35 million or 7% of global turnover.13 Five months from now. If you have European customers or European employees and you’re not already in motion, you’re late.
OWASP Top 10 for LLMs 2025 added two new categories worth knowing: System Prompt Leakage and Vector/Embedding Weaknesses (the latter targeting RAG systems).14 Prompt injection is still number one. It will probably always be number one.
What none of these frameworks will tell you is how to get 50,000 employees to stop pasting proprietary data into consumer AI tools by next Tuesday. That’s on you.
The Architecture That’s Actually Emerging
After talking to dozens of teams building AI integrations, a pattern has solidified. It’s not revolutionary. It’s just what works.
The AI gateway is the centerpiece. Think of it as a proxy layer between your applications and every AI model provider you use. All LLM traffic routes through it. Authentication, content policies, PII redaction, cost controls, rate limiting, audit logging, all in one place. TrueFoundry, Portkey, and Kong all offer commercial versions. LiteLLM is a solid open-source option. Microsoft pushes Azure API Management for this role.
Behind the gateway, you layer guardrails. AWS Bedrock Guardrails can filter across six harm categories and recently added automated reasoning checks that use formal mathematical logic, claiming 99% factual accuracy verification.15 NVIDIA’s NeMo Guardrails provides open-source microservices for content safety and jailbreak detection trained on 17,000 known attacks.16 Guardrails AI offers community-contributed validators with 5,900 GitHub stars.
For prompt injection specifically, and I need to be honest here, there is no complete fix. OpenAI acknowledged this when they launched Lockdown Mode for ChatGPT in February 2026. Attack success rates hit 50-84% depending on configuration.17 Critical CVEs hit Microsoft Copilot (CVSS 9.3), GitHub Copilot (CVSS 9.6), and Cursor IDE (CVSS 9.8) in 2025.18 The best you can do is defense-in-depth: input validation, output scanning, privilege minimization for agent tool access, and behavioral monitoring. Infrastructure-level enforcement, not tool-level safety features. (This is the thesis of my Eight Guardrails framework: guardrails enforced by the AI tool itself are bypassable; guardrails enforced by infrastructure are not.)
For RAG systems, you need governance across three layers: what goes into your knowledge base, what gets retrieved and injected into prompts, and what comes out the other side. Document-level access control enforcement during retrieval is critical. If your RAG system doesn’t respect the same permissions as your document management system, you’ve just built a compliance bypass tool and called it a productivity feature.
The AI Assistant Wars, From a Security Perspective
Every enterprise I work with is running at least two, usually three AI platforms simultaneously. Here’s where things stand:
Microsoft 365 Copilot has a deployment problem that nobody at Microsoft wants to talk about publicly. Only 6% of organizations moved from pilot to production according to Gartner.19 Sixty percent are stuck in pilot purgatory. The core issue isn’t the technology. It’s that Copilot accesses anything available to the user across SharePoint, Teams, OneDrive, and Exchange. Suddenly, years of sloppy permission management becomes a data exposure issue. AI-related data security incidents jumped from 27% to 40% between 2023 and 2024.20
Claude is quietly winning the enterprise market. Thirty-two percent of enterprise LLM workloads now run on Claude models, overtaking OpenAI’s 25% (which dropped from 50% two years ago).21 Claude is particularly strong in healthcare, finance, and legal, sectors where Anthropic’s safety-first positioning and contractual data protections matter. As of January 2026, Anthropic is even integrated as a subprocessor in Microsoft 365 Copilot.22
ChatGPT Enterprise still has the largest user base with over 5 million paying business clients.23 It’s the consumer default that enterprises grudgingly formalize.
The multi-model reality means your governance approach can’t be vendor-specific. You need policies and controls that work across all of them, which brings us back to the gateway architecture.
The US Regulatory Situation Is a Mess
I’m not going to sugarcoat this: if you’re trying to plan a compliance strategy around US federal AI regulation, good luck.
The Trump administration revoked Biden’s EO 14110 on day one and replaced it with EO 14179, “Removing Barriers to American Leadership in Artificial Intelligence."24 The OMB rescinded Biden’s risk management memo and replaced it with M-25-21, which dropped the categories of “rights-impacting” and “safety-impacting” AI in favor of “high-impact” AI.25 The new memo notably does not reference the NIST AI RMF at all.
Then came the December 2025 executive order on federal AI preemption, which directed the Attorney General to create an AI Litigation Task Force specifically to challenge state AI laws.26 It conditions broadband funding eligibility on states not having “onerous AI laws.” Whether an executive order can actually preempt state legislation without Congress is a live legal question that’s almost certainly heading to court.
Meanwhile, the Colorado AI Act (the first broad state AI law) got delayed from February to June 30, 2026.27 Over 1,000 AI-related bills were introduced across states in 2025.28 The FY2026 NDAA directs DoD to build an AI security framework for defense contractors, essentially a “CMMC for AI."29 The NDAA also banned DeepSeek from all DoD contracts and systems.
What this means practically: plan for the EU AI Act as your compliance floor. It’s the only regulation with clear deadlines, real enforcement mechanisms, and extraterritorial reach. US federal regulation is moving toward deregulation. State regulation is under legal threat. The EU is the only jurisdiction where you can actually build a compliance roadmap with confidence.
For government work specifically: Microsoft achieved FedRAMP High for its full AI suite in December 2025.30 AWS Bedrock holds FedRAMP High and DoD IL5. Anthropic’s Claude reached FedRAMP High through both AWS and Google Cloud. OpenAI got prioritized FedRAMP 20x authorization.31 If you’re selling to federal agencies, FedRAMP is the gatekeeper.
Privacy: Two Tiers, One Problem
Here’s a development from late 2025 that didn’t get enough attention: OpenAI, Anthropic, and Google all quietly shifted to opt-out training data models for their consumer plans.32 If your employees are using free or personal-tier AI accounts for work, their inputs may be training the next model generation.
Enterprise tiers maintain no-training guarantees. ChatGPT Enterprise, Claude for Work, Azure OpenAI, and AWS Bedrock all contractually commit to not using customer data for training. But the gap between “enterprise plan” and “person’s personal account they use at work” is where data leaks.
This is why the AI gateway matters for data protection too. Proxying all outbound AI traffic through a gateway that can intercept and redact PII before it leaves your network boundary is becoming standard practice. Tools like Nightfall AI, Microsoft Purview DSPM for AI, and Lakera Guard handle real-time scanning and redaction. AWS Bedrock Guardrails can automatically mask 16+ types of PII.33
Despite all this tooling, 78% of organizations still can’t validate what data is entering AI training pipelines.34 Fifteen percent of employees have pasted sensitive data into public LLMs.35 The technology exists to prevent this. The organizational discipline to deploy it universally does not.
For regulated industries, the stakes are even higher. The proposed HIPAA Security Rule update, the first major revision in 20 years, would explicitly protect ePHI used in AI training data.36 The average healthcare data breach costs $10.9 million.37 On the GDPR front, OpenAI was fined €15 million by Italy’s DPA for training on personal data without adequate legal basis.38
The $2 Billion That Changed the Vendor Market Overnight
September 2025 was the month every major cybersecurity platform swallowed an AI security startup whole:
- Cisco acquired Robust Intelligence (~$400M), creating Cisco AI Defense39
- Palo Alto Networks acquired Protect AI (~$500M+), building Prisma AIRS40
- Check Point acquired Lakera (~$300M)41
- F5 acquired CalypsoAI ($180M), launching F5 AI Guardrails42
- CrowdStrike acquired Pangea (~$260M)43
- Cato Networks acquired Aim Security43
Two billion dollars in AI security acquisitions in one quarter. The standalone AI security startup category essentially got absorbed into the existing cybersecurity platform vendors. If you already have a relationship with Cisco, Palo Alto, or CrowdStrike, your AI security story now lives inside their platform. If you need independent tooling, HiddenLayer (model security, partnered with Microsoft), Arthur AI (agentic discovery and governance), Guardrails AI (open-source output validation), and Patronus AI (hallucination detection) are the remaining independents worth evaluating.
What CISOs Are Actually Worried About
I talk to security leaders regularly. Here’s the hierarchy of concerns that shows up in every conversation, backed by data from Splunk, Saviynt, Team8, and Proofpoint surveys covering thousands of CISOs:44
Data leakage via AI systems. 76%+ are worried. This is the number one concern, full stop.
Shadow AI. 90% are concerned about privacy and security implications of unsanctioned AI use.
Hallucination causing business harm. 83% worry about this, and with good reason. In the legal profession alone, hallucination incidents went from “two per week” to “two to three cases per day” by spring 2025. Over 1,005 court cases involving AI-hallucinated content have been tracked globally. Sixty-six lawyers were sanctioned for submitting fabricated citations in a single year.45
Personal liability. 76% of CISOs now worry about being personally on the hook for AI-related security incidents.44 This is new and significant.
The incidents justifying these concerns are not theoretical. In November 2025, Anthropic disclosed what appears to be the first documented large-scale cyberattack executed primarily by AI. Chinese state-sponsored actors used jailbroken Claude Code to hit approximately 30 organizations.46 GitHub Copilot had a CVSS 9.6 prompt injection vulnerability where code comments could trigger remote code execution on over 100,000 developer machines.18 Microsoft 365 Copilot experienced a zero-click data exfiltration attack via crafted emails.20
These are not edge cases. These are mainstream tools being exploited in production.
The Insurance Gap Nobody Wants to Discuss
Cyber insurance is roughly a $16 billion market projected to hit $40 billion by 2030.47 AI risks currently sit in what the industry calls “silent coverage,” implicitly included under existing policies without explicit terms, similar to how cyber risk was treated fifteen years ago.
Coalition launched deepfake-specific coverage in December 2025. Embroker has drafted explicit AI endorsements for professional liability.48 But a growing number of insurers are adding broad AI-specific exclusions to E&O, D&O, and cyber policies. If your organization is deploying AI at scale and you haven’t reviewed your insurance coverage specifically for AI-related liabilities, you have a blind spot.
On the IP front, over 151 notable copyright suits are pending against AI platforms in the US.49 A February 2026 federal court ruling held that user communications with Claude are not protected by attorney-client privilege, meaning AI inputs may be discoverable in litigation.
So What Should You Actually Do?
I’ve distilled this down to the minimum viable action list for an enterprise that’s serious about AI adoption but hasn’t yet built governance infrastructure. In priority order:
1. Find out what’s already running. Before you build anything, do a shadow AI discovery sweep. You cannot govern what you cannot see. Network monitoring tools, CASB logs, procurement records, whatever it takes to build an inventory of every AI tool touching your environment.
2. Deploy an AI gateway. Route all sanctioned AI traffic through a centralized proxy. This gives you authentication, audit logging, cost visibility, and a policy enforcement point. Do this before you expand AI usage, not after.
3. Write an acceptable use policy and make sure people actually know about it. Only 18.5% of employees know their company’s AI policy exists.3 The policy itself doesn’t need to be 50 pages. It needs to clearly state what tools are approved, what data classifications can be sent to AI services, and what human review is required before AI outputs are used in decisions or client-facing work.
4. Negotiate your vendor contracts properly. “No Training” clauses that prevent use of your data for model improvement. Zero Data Retention options where available. Data residency commitments. Model portability and exit strategy documentation. Ninety-two percent of AI vendors claim broad data usage rights.50 Read the fine print.
5. Start ISO 42001 preparation alongside your existing ISO 27001 program. The market is moving toward requiring it. Getting ahead of the wave is cheaper and less painful than scrambling when a major customer or regulator demands it.
6. Plan for the EU AI Act deadline of August 2, 2026. If you have any European exposure (customers, employees, data subjects) you need a compliance plan. Five months is tight for organizations starting from zero.
7. Build agentic AI governance now. Autonomous AI agents that can take actions, call tools, and execute multi-step workflows are the next wave. The security model for agents is a different animal than chat-based AI. If you’re deploying agents or planning to, establish governance guardrails before they go to production, not after something breaks.
The Honest Assessment
Here’s the part where I level with you. Less than 1% of enterprises have what anyone would call a mature AI governance program.51 Only 29% feel prepared to defend against AI-related threats.52 Only 37% conduct regular AI risk assessments.52 And the capability gap between AI adoption speed and governance readiness is widening, not closing.
The organizations getting this right are spending approximately 0.5-1% of their AI technology budget on governance infrastructure.53 They’re treating governance as a delivery accelerator, the thing that lets them move faster because they have guardrails, not as a brake on innovation. The data suggests they’re seeing 300-2,000% ROI on their AI investments while avoiding the incidents that cost unprepared organizations $4.8 million per breach on average.54
The window for getting governance right before something forces your hand is closing. The EU AI Act enforcement deadline is five months out. State AI laws are proliferating despite federal pushback. The insurance market is hardening. And the AI agents being deployed today are more autonomous, more capable, and more dangerous when misconfigured than anything we’ve dealt with before.
Technology changes. Human challenges don’t. The challenge right now isn’t whether AI works. It does. The challenge is whether your organization can adopt it without creating risks you don’t understand, can’t see, and aren’t insured for.
I’ve been through enough platform shifts to know how this plays out. The organizations that invest in governance early come out ahead. The ones that wait for an incident to force their hand pay a lot more, in money, in reputation, and in trust.
Build the guardrails now. You’ll thank yourself later.
Michael Rishi Forrester is a Principal Training Architect with 30 years of experience helping organizations adopt technology safely. He has trained over 1 million engineers across platforms including KodeKloud, Coursera, O’Reilly, and YouTube. His Eight Guardrails framework for AI agent safety in Kubernetes environments was published in February 2026.
Statistics Appendix
1 Knostic, “Detect and Control: Shadow AI in the Enterprise” (2025). Estimate based on enterprise shadow AI discovery audits. www.knostic.ai/blog/shad…
2 Invicti, “Shadow AI: Risks, Challenges, and Solutions in 2025.” Employee AI usage rates from enterprise surveys. www.invicti.com/blog/web-…
3 Invicti, ibid. Only 18.5% of employees surveyed were aware of their employer’s AI acceptable use policy.
4 ISACA, “Artificial Intelligence Acceptable Use Policy Template” (2025). Based on ISACA member survey data on formal AI policy adoption. www.isaca.org/resources…
5 Gartner, prediction cited in KPMG, “Shadow AI Is Already Here: Take Control, Reduce Risk, and Unleash Innovation” (2025). kpmg.com/kpmg-us/c…
6 KPMG, ibid. Incremental cost of shadow AI-related breaches versus baseline security incidents.
7 Cloud Security Alliance, AI Controls Matrix (July 2025). 243 control objectives across 18 security domains for AI systems. cloudsecurityalliance.org
8 Computer Weekly, “Singapore debuts world’s first governance framework for agentic AI” (January 2026). Published by Singapore’s IMDA and AI Verify Foundation. www.computerweekly.com/news/3666…
9 NIST, Internal Report IR 8596 (Initial Public Draft, December 2025). “Cybersecurity Framework Profile for AI Systems.” nvlpubs.nist.gov/nistpubs/… NCCoE project page: www.nccoe.nist.gov/projects/…
10 A-LIGN, “ISO 42001 Certification” (2025). Microsoft SSPA v10 program requirements for AI suppliers. www.a-lign.com/service/i…
11 Individual press releases: Cornerstone OnDemand (https://www.cornerstoneondemand.com), Hudson Talent Solutions (GlobeNewswire, February 24, 2026), Greenhouse (PR Newswire, March 2026), Maven AGI (PR Newswire, February 2026).
12 A-LIGN, 2025 Benchmark Report. 76% of surveyed companies plan AI audit or certification within 24 months. www.a-lign.com/service/i…
13 European Commission, “AI Act: Shaping Europe’s Digital Future.” Enforcement timeline and penalty structure. digital-strategy.ec.europa.eu/en/polici… Implementation timeline: ai-act-service-desk.ec.europa.eu/en/ai-act…
14 Confident AI, “OWASP Top 10 2025 for LLM Applications: What’s New?” (2025). LLM07 (System Prompt Leakage) and LLM08 (Vector and Embedding Weaknesses) added. www.confident-ai.com/blog/owas…
15 AWS, Amazon Bedrock Guardrails product page. Automated reasoning checks for factual accuracy. aws.amazon.com/bedrock/g…
16 VentureBeat, “Nvidia tackles agentic AI safety and security with new NeMo Guardrails NIMs” (2025). Training dataset of 17,000+ known jailbreak attacks. venturebeat.com/ai/nvidia…
17 Vectra AI, “Prompt injection: types, real-world CVEs, and enterprise defenses.” Attack success rates of 50-84% across configurations. www.vectra.ai/topics/pr… Obsidian Security, “Prompt Injection Attacks: The Most Common AI Exploit in 2025.” www.obsidiansecurity.com/blog/prom…
18 Vectra AI, ibid. CVE details for Microsoft Copilot (CVSS 9.3), GitHub Copilot (CVSS 9.6), Cursor IDE (CVSS 9.8).
19 Gartner, cited in Adoptify AI, “2026 Microsoft Copilot Governance Framework: Executive Guide.” 6% pilot-to-production conversion rate, 60% stuck in pilots. www.adoptify.ai/blogs/202…
20 Knostic, “Microsoft Copilot data security and governance: A practical guide for CISOs.” AI data security incidents rising from 27% to 40%. www.knostic.ai/blog/micr…
21 Data Studios, “Claude is preferred by enterprises, ChatGPT by employees: how generative AI choices are changing within companies in 2025.” Claude at 32% of enterprise workloads, OpenAI at 25% (down from 50%). www.datastudios.org/post/clau…
22 Microsoft Learn, “Anthropic as a subprocessor for Microsoft Online Services” (January 2026). learn.microsoft.com/en-us/cop…
23 Data Studios, ibid. ChatGPT Enterprise at 5M+ paying business clients.
24 Squire Patton Boggs, “Key Insights on President Trump’s New AI Executive Order and Policy Regulatory Implications.” EO 14179 replacing EO 14110. www.squirepattonboggs.com/insights/…
25 Hunton Andrews Kurth, “OMB Issues Revised Policies on AI Use and Procurement by Federal Agencies.” M-25-21 replacing M-24-10. www.hunton.com/privacy-a… Wiley, “Trump Administration Revamps Guidance on Federal Use and Procurement of AI.” www.wiley.law/alert-Tru…
26 Sidley Austin, “Unpacking the December 11, 2025 Executive Order: Ensuring a National Policy Framework for Artificial Intelligence.” AI Litigation Task Force and state preemption provisions. www.sidley.com/en/insigh…
27 Hudson Cook, “Colorado Special Session Update: AI Law Delayed to June 2026.” www.hudsoncook.com/article/c… Epstein Becker Green, “Colorado’s Historic AI Law Survives Without Delay (So Far).” www.workforcebulletin.com/colorados…
28 Credo AI, “Latest AI Regulations Update: What Enterprises Need to Know in 2026.” 1,000+ state-level AI bills in 2025. www.credo.ai/blog/late…
29 Crowell & Moring, “CMMC for AI? Defense Policy Law Imposes AI Security Framework and Requirements on Contractors.” FY2026 NDAA provisions. www.crowell.com/en/insigh…
30 FinancialContent, “Microsoft Confirms All AI Services Meet FedRAMP High Security Standards” (December 30, 2025). markets.financialcontent.com/wral/arti…
31 GSA, “GSA and FedRAMP Announce Major Initiative: Prioritizing 20x Authorizations for AI Cloud Solutions” (August 25, 2025). www.gsa.gov/about-us/… FedScoop, “ChatGPT gets one step closer to widespread government use.” fedscoop.com/chatgpt-g…
32 TV News Check, “OpenAI, Google & Anthropic All Just Quietly Backtracked User Privacy Settings.” tvnewscheck.com/business/… Shelly Palmer, “Anthropic’s Privacy Pivot: Users Must Opt-Out by September 28” (August 2025). shellypalmer.com/2025/08/a…
33 AWS, Amazon Bedrock Guardrails. PII masking for 16+ data types. aws.amazon.com/bedrock/g…
34 Protecto AI, “AI Data Privacy Statistics & Trends 2025.” 78% of organizations unable to validate data in AI training pipelines. www.protecto.ai/blog/ai-d…
35 Protecto AI, ibid.; Lakera, “Data Loss Prevention (DLP): A Complete Guide for the GenAI Era.” 15% of employees have pasted sensitive data into public LLMs. www.lakera.ai/blog/data…
36 HIPAA Journal, “When AI Technology and HIPAA Collide.” Proposed Security Rule update covering ePHI in AI training data. www.hipaajournal.com/when-ai-t…
37 HIPAA Journal, ibid. $10.9M average cost of healthcare data breach (IBM/Ponemon 2025 data).
38 SecurePrivacy, “EU AI Act 2026 Compliance Guide.” Italian DPA Garante €15M fine against OpenAI. secureprivacy.ai/blog/eu-a…
39 Calcalist Tech, “Inside Yaron Singer’s surprising $400M sale to Cisco” (2025). www.calcalistech.com/ctechnews… Cisco, “Robust Intelligence Is Now Part of Cisco.” www.cisco.com/site/us/e…
40 Palo Alto Networks, “Palo Alto Networks Completes Acquisition of Protect AI” (2025). www.paloaltonetworks.com/company/p… GeekWire, “Palo Alto Networks to acquire Seattle cybersecurity startup Protect AI.” www.geekwire.com/2025/palo…
41 ChannelE2E, “Check Point Acquires Lakera to Build Full AI Security Stack.” www.channele2e.com/news/chec…
42 F5, “F5 to acquire CalypsoAI to bring advanced AI guardrails to large enterprises.” www.f5.com/company/n…
43 SecurityWeek, “Cybersecurity M&A Roundup: 40 Deals Announced in September 2025.” CrowdStrike/Pangea and Cato/Aim Security. www.securityweek.com/cybersecu… Infosecurity Magazine, “Cybersecurity M&A Roundup: CrowdStrike, SentinelOne and Check Point In.” www.infosecurity-magazine.com/news-feat…
44 CISO survey data aggregated from: Splunk/Cisco 2025 CISO Report (650 CISOs), Saviynt State of Identity Security Survey (235 CISOs), Team8 2025 CISO Village Survey (110+ CISOs), Proofpoint 2025 Voice of the CISO Report (1,600 CISOs globally). Percentages represent cross-survey averages where data overlaps.
45 Cronkite News / Arizona PBS, “As more lawyers fall for AI hallucinations, ChatGPT says: Check my work” (October 28, 2025). 1,005+ tracked cases, 66 sanctions, incident frequency increase. cronkitenews.azpbs.org/2025/10/2…
46 Anthropic threat intelligence disclosure, GTG-1002 (November 2025). Chinese state-sponsored actors using jailbroken Claude Code against approximately 30 organizations.
47 WTW, “Cyber risk: A look ahead to 2026” (February 2026). $16B current market, $40B projection by 2030. www.wtwco.com/en-us/ins… WTW, “Insuring the AI Age” (December 2025). www.wtwco.com/en-us/ins…
48 Insurance Business America, “Cyber insurance enters the AI risk era as limits, wording and underwriting models shift.” Coalition deepfake coverage and Embroker AI endorsements. www.insurancebusinessmag.com/us/news/c…
49 Ropes & Gray, “An End-of-Year Update to the Current State of AI Related Copyright Litigation” (December 2024, updated 2025). 151+ notable pending suits. www.ropesgray.com/en/insigh…
50 Data & Trusted AI Alliance, AI Vendor Assessment Framework (October 2025). 92% of AI vendors claim broad data usage rights versus 63% market average. Referenced in vendor evaluation research from Netguru and Pertama Partners.
51 Liminal, “Enterprise AI Governance: Complete Implementation Guide (2025).” Less than 1% of enterprises with mature AI governance programs despite 78% using AI. www.liminal.ai/blog/ente…
52 Akto, “State of Agentic AI Security 2025: Adoption, Risks & CISO Insights.” 29% prepared to defend against AI threats, 37% conduct regular AI risk assessments. www.akto.io/blog/stat…
53 SecurePrivacy, “AI Governance: Enterprise Compliance & Risk Management Guide (2026).” Governance spend benchmarks of 0.5-1% of AI technology investment. secureprivacy.ai/blog/ai-g…
54 IBM, Cost of a Data Breach Report 2025. $4.8M global average cost per data breach. Widely cited across enterprise security literature.
Opinion: I Just Helped an Enterprise Adopt AI. Here's What Nobody's Talking About.
March 2026 | Michael Rishi Forrester
I’ve spent the last 30 years watching enterprises adopt new technology. Mainframes to client-server. On-prem to cloud. Monoliths to microservices. Every wave has the same arc: early enthusiasm, chaotic adoption, scary incidents, and then, eventually, grown-up governance.
AI is somewhere between phases two and three right now. And after spending the last several months helping clients figure out how to actually integrate AI into their operations without burning the house down, I can tell you: the conversation most organizations are having about AI safety is the wrong conversation.
They’re asking “is AI safe?” when they should be asking “do we even know what our employees are doing with AI right now?”
The answer, almost universally, is no.
The Shadow AI Problem Is Worse Than You Think
Here’s a number that should keep every CISO up at night: the average enterprise currently has roughly 1,200 unofficial AI applications running inside it.1 Not sanctioned. Not monitored. Not governed. Just people doing their jobs with whatever tools they find useful.
Fifty-eight percent of employees use AI productivity tools daily.2 Only 18.5% are aware their company even has an AI policy.3 And only 28% of organizations have written a formal one.4
This isn’t a security problem. It’s an organizational awareness problem wearing a security costume.
When I sit down with a client’s leadership team, I don’t start with frameworks or compliance checklists. I start with a question: Do you know how many AI tools are running inside your company right now? The silence that follows tells me everything.
KPMG calls it shadow AI. Gartner predicts that through 2026, at least 80% of unauthorized AI transactions will come from internal policy violations, not external attackers.5 When shadow AI does lead to a breach, it costs an average of $670,000 more than a comparable incident.6 The threat isn’t some sophisticated nation-state attack. It’s Karen in accounting pasting customer financial data into ChatGPT because it helps her build pivot tables faster.
What the Frameworks Actually Say (And What They Don’t)
If you’re an architect or a leader trying to get your arms around AI governance, you’re drowning in frameworks right now. NIST AI RMF. ISO 42001. EU AI Act. OWASP Top 10 for LLMs. The Cloud Security Alliance’s AI Controls Matrix with its 243 control objectives across 18 domains.7 Singapore just published the world’s first governance framework specifically for agentic AI systems.8
That’s a lot of paper.
Here’s what actually matters if you’re building a governance program today:
NIST AI RMF is the operational backbone in the US. It’s what the Colorado AI Act references for safe harbor protection, and multinationals use it as the layer beneath regulatory compliance. NIST also dropped a preliminary draft of the Cyber AI Profile (IR 8596) in December 2025, which maps AI risks onto the Cybersecurity Framework 2.0 structure.9 If you’re only going to read one thing, read that.
ISO 42001 went from “nice to have” to table stakes in about six months. Microsoft’s supplier program now mandates it for AI systems handling sensitive data.10 Cornerstone OnDemand, Hudson Talent Solutions, Greenhouse, and Maven AGI all certified in Q1 2026.11 Seventy-six percent of companies plan to pursue an AI audit or certification within the next two years.12 If you sell to enterprises, this is coming for you whether you like it or not.
The EU AI Act is the one with teeth. Prohibited practices became enforceable in February 2025. GPAI model obligations kicked in August 2025. The big date is August 2, 2026, when high-risk AI system requirements become fully enforceable, with fines up to €35 million or 7% of global turnover.13 Five months from now. If you have European customers or European employees and you’re not already in motion, you’re late.
OWASP Top 10 for LLMs 2025 added two new categories worth knowing: System Prompt Leakage and Vector/Embedding Weaknesses (the latter targeting RAG systems).14 Prompt injection is still number one. It will probably always be number one.
What none of these frameworks will tell you is how to get 50,000 employees to stop pasting proprietary data into consumer AI tools by next Tuesday. That’s on you.
The Architecture That’s Actually Emerging
After talking to dozens of teams building AI integrations, a pattern has solidified. It’s not revolutionary. It’s just what works.
The AI gateway is the centerpiece. Think of it as a proxy layer between your applications and every AI model provider you use. All LLM traffic routes through it. Authentication, content policies, PII redaction, cost controls, rate limiting, audit logging, all in one place. TrueFoundry, Portkey, and Kong all offer commercial versions. LiteLLM is a solid open-source option. Microsoft pushes Azure API Management for this role.
Behind the gateway, you layer guardrails. AWS Bedrock Guardrails can filter across six harm categories and recently added automated reasoning checks that use formal mathematical logic, claiming 99% factual accuracy verification.15 NVIDIA’s NeMo Guardrails provides open-source microservices for content safety and jailbreak detection trained on 17,000 known attacks.16 Guardrails AI offers community-contributed validators with 5,900 GitHub stars.
For prompt injection specifically, and I need to be honest here, there is no complete fix. OpenAI acknowledged this when they launched Lockdown Mode for ChatGPT in February 2026. Attack success rates hit 50-84% depending on configuration.17 Critical CVEs hit Microsoft Copilot (CVSS 9.3), GitHub Copilot (CVSS 9.6), and Cursor IDE (CVSS 9.8) in 2025.18 The best you can do is defense-in-depth: input validation, output scanning, privilege minimization for agent tool access, and behavioral monitoring. Infrastructure-level enforcement, not tool-level safety features. (This is the thesis of my Eight Guardrails framework: guardrails enforced by the AI tool itself are bypassable; guardrails enforced by infrastructure are not.)
For RAG systems, you need governance across three layers: what goes into your knowledge base, what gets retrieved and injected into prompts, and what comes out the other side. Document-level access control enforcement during retrieval is critical. If your RAG system doesn’t respect the same permissions as your document management system, you’ve just built a compliance bypass tool and called it a productivity feature.
The AI Assistant Wars, From a Security Perspective
Every enterprise I work with is running at least two, usually three AI platforms simultaneously. Here’s where things stand:
Microsoft 365 Copilot has a deployment problem that nobody at Microsoft wants to talk about publicly. Only 6% of organizations moved from pilot to production according to Gartner.19 Sixty percent are stuck in pilot purgatory. The core issue isn’t the technology. It’s that Copilot accesses anything available to the user across SharePoint, Teams, OneDrive, and Exchange. Suddenly, years of sloppy permission management becomes a data exposure issue. AI-related data security incidents jumped from 27% to 40% between 2023 and 2024.20
Claude is quietly winning the enterprise market. Thirty-two percent of enterprise LLM workloads now run on Claude models, overtaking OpenAI’s 25% (which dropped from 50% two years ago).21 Claude is particularly strong in healthcare, finance, and legal, sectors where Anthropic’s safety-first positioning and contractual data protections matter. As of January 2026, Anthropic is even integrated as a subprocessor in Microsoft 365 Copilot.22
ChatGPT Enterprise still has the largest user base with over 5 million paying business clients.23 It’s the consumer default that enterprises grudgingly formalize.
The multi-model reality means your governance approach can’t be vendor-specific. You need policies and controls that work across all of them, which brings us back to the gateway architecture.
The US Regulatory Situation Is a Mess
I’m not going to sugarcoat this: if you’re trying to plan a compliance strategy around US federal AI regulation, good luck.
The Trump administration revoked Biden’s EO 14110 on day one and replaced it with EO 14179, “Removing Barriers to American Leadership in Artificial Intelligence."24 The OMB rescinded Biden’s risk management memo and replaced it with M-25-21, which dropped the categories of “rights-impacting” and “safety-impacting” AI in favor of “high-impact” AI.25 The new memo notably does not reference the NIST AI RMF at all.
Then came the December 2025 executive order on federal AI preemption, which directed the Attorney General to create an AI Litigation Task Force specifically to challenge state AI laws.26 It conditions broadband funding eligibility on states not having “onerous AI laws.” Whether an executive order can actually preempt state legislation without Congress is a live legal question that’s almost certainly heading to court.
Meanwhile, the Colorado AI Act (the first broad state AI law) got delayed from February to June 30, 2026.27 Over 1,000 AI-related bills were introduced across states in 2025.28 The FY2026 NDAA directs DoD to build an AI security framework for defense contractors, essentially a “CMMC for AI."29 The NDAA also banned DeepSeek from all DoD contracts and systems.
What this means practically: plan for the EU AI Act as your compliance floor. It’s the only regulation with clear deadlines, real enforcement mechanisms, and extraterritorial reach. US federal regulation is moving toward deregulation. State regulation is under legal threat. The EU is the only jurisdiction where you can actually build a compliance roadmap with confidence.
For government work specifically: Microsoft achieved FedRAMP High for its full AI suite in December 2025.30 AWS Bedrock holds FedRAMP High and DoD IL5. Anthropic’s Claude reached FedRAMP High through both AWS and Google Cloud. OpenAI got prioritized FedRAMP 20x authorization.31 If you’re selling to federal agencies, FedRAMP is the gatekeeper.
Privacy: Two Tiers, One Problem
Here’s a development from late 2025 that didn’t get enough attention: OpenAI, Anthropic, and Google all quietly shifted to opt-out training data models for their consumer plans.32 If your employees are using free or personal-tier AI accounts for work, their inputs may be training the next model generation.
Enterprise tiers maintain no-training guarantees. ChatGPT Enterprise, Claude for Work, Azure OpenAI, and AWS Bedrock all contractually commit to not using customer data for training. But the gap between “enterprise plan” and “person’s personal account they use at work” is where data leaks.
This is why the AI gateway matters for data protection too. Proxying all outbound AI traffic through a gateway that can intercept and redact PII before it leaves your network boundary is becoming standard practice. Tools like Nightfall AI, Microsoft Purview DSPM for AI, and Lakera Guard handle real-time scanning and redaction. AWS Bedrock Guardrails can automatically mask 16+ types of PII.33
Despite all this tooling, 78% of organizations still can’t validate what data is entering AI training pipelines.34 Fifteen percent of employees have pasted sensitive data into public LLMs.35 The technology exists to prevent this. The organizational discipline to deploy it universally does not.
For regulated industries, the stakes are even higher. The proposed HIPAA Security Rule update, the first major revision in 20 years, would explicitly protect ePHI used in AI training data.36 The average healthcare data breach costs $10.9 million.37 On the GDPR front, OpenAI was fined €15 million by Italy’s DPA for training on personal data without adequate legal basis.38
The $2 Billion That Changed the Vendor Market Overnight
September 2025 was the month every major cybersecurity platform swallowed an AI security startup whole:
- Cisco acquired Robust Intelligence (~$400M), creating Cisco AI Defense39
- Palo Alto Networks acquired Protect AI (~$500M+), building Prisma AIRS40
- Check Point acquired Lakera (~$300M)41
- F5 acquired CalypsoAI ($180M), launching F5 AI Guardrails42
- CrowdStrike acquired Pangea (~$260M)43
- Cato Networks acquired Aim Security43
Two billion dollars in AI security acquisitions in one quarter. The standalone AI security startup category essentially got absorbed into the existing cybersecurity platform vendors. If you already have a relationship with Cisco, Palo Alto, or CrowdStrike, your AI security story now lives inside their platform. If you need independent tooling, HiddenLayer (model security, partnered with Microsoft), Arthur AI (agentic discovery and governance), Guardrails AI (open-source output validation), and Patronus AI (hallucination detection) are the remaining independents worth evaluating.
What CISOs Are Actually Worried About
I talk to security leaders regularly. Here’s the hierarchy of concerns that shows up in every conversation, backed by data from Splunk, Saviynt, Team8, and Proofpoint surveys covering thousands of CISOs:44
Data leakage via AI systems. 76%+ are worried. This is the number one concern, full stop.
Shadow AI. 90% are concerned about privacy and security implications of unsanctioned AI use.
Hallucination causing business harm. 83% worry about this, and with good reason. In the legal profession alone, hallucination incidents went from “two per week” to “two to three cases per day” by spring 2025. Over 1,005 court cases involving AI-hallucinated content have been tracked globally. Sixty-six lawyers were sanctioned for submitting fabricated citations in a single year.45
Personal liability. 76% of CISOs now worry about being personally on the hook for AI-related security incidents.44 This is new and significant.
The incidents justifying these concerns are not theoretical. In November 2025, Anthropic disclosed what appears to be the first documented large-scale cyberattack executed primarily by AI. Chinese state-sponsored actors used jailbroken Claude Code to hit approximately 30 organizations.46 GitHub Copilot had a CVSS 9.6 prompt injection vulnerability where code comments could trigger remote code execution on over 100,000 developer machines.18 Microsoft 365 Copilot experienced a zero-click data exfiltration attack via crafted emails.20
These are not edge cases. These are mainstream tools being exploited in production.
The Insurance Gap Nobody Wants to Discuss
Cyber insurance is roughly a $16 billion market projected to hit $40 billion by 2030.47 AI risks currently sit in what the industry calls “silent coverage,” implicitly included under existing policies without explicit terms, similar to how cyber risk was treated fifteen years ago.
Coalition launched deepfake-specific coverage in December 2025. Embroker has drafted explicit AI endorsements for professional liability.48 But a growing number of insurers are adding broad AI-specific exclusions to E&O, D&O, and cyber policies. If your organization is deploying AI at scale and you haven’t reviewed your insurance coverage specifically for AI-related liabilities, you have a blind spot.
On the IP front, over 151 notable copyright suits are pending against AI platforms in the US.49 A February 2026 federal court ruling held that user communications with Claude are not protected by attorney-client privilege, meaning AI inputs may be discoverable in litigation.
So What Should You Actually Do?
I’ve distilled this down to the minimum viable action list for an enterprise that’s serious about AI adoption but hasn’t yet built governance infrastructure. In priority order:
1. Find out what’s already running. Before you build anything, do a shadow AI discovery sweep. You cannot govern what you cannot see. Network monitoring tools, CASB logs, procurement records, whatever it takes to build an inventory of every AI tool touching your environment.
2. Deploy an AI gateway. Route all sanctioned AI traffic through a centralized proxy. This gives you authentication, audit logging, cost visibility, and a policy enforcement point. Do this before you expand AI usage, not after.
3. Write an acceptable use policy and make sure people actually know about it. Only 18.5% of employees know their company’s AI policy exists.3 The policy itself doesn’t need to be 50 pages. It needs to clearly state what tools are approved, what data classifications can be sent to AI services, and what human review is required before AI outputs are used in decisions or client-facing work.
4. Negotiate your vendor contracts properly. “No Training” clauses that prevent use of your data for model improvement. Zero Data Retention options where available. Data residency commitments. Model portability and exit strategy documentation. Ninety-two percent of AI vendors claim broad data usage rights.50 Read the fine print.
5. Start ISO 42001 preparation alongside your existing ISO 27001 program. The market is moving toward requiring it. Getting ahead of the wave is cheaper and less painful than scrambling when a major customer or regulator demands it.
6. Plan for the EU AI Act deadline of August 2, 2026. If you have any European exposure (customers, employees, data subjects) you need a compliance plan. Five months is tight for organizations starting from zero.
7. Build agentic AI governance now. Autonomous AI agents that can take actions, call tools, and execute multi-step workflows are the next wave. The security model for agents is a different animal than chat-based AI. If you’re deploying agents or planning to, establish governance guardrails before they go to production, not after something breaks.
The Honest Assessment
Here’s the part where I level with you. Less than 1% of enterprises have what anyone would call a mature AI governance program.51 Only 29% feel prepared to defend against AI-related threats.52 Only 37% conduct regular AI risk assessments.52 And the capability gap between AI adoption speed and governance readiness is widening, not closing.
The organizations getting this right are spending approximately 0.5-1% of their AI technology budget on governance infrastructure.53 They’re treating governance as a delivery accelerator, the thing that lets them move faster because they have guardrails, not as a brake on innovation. The data suggests they’re seeing 300-2,000% ROI on their AI investments while avoiding the incidents that cost unprepared organizations $4.8 million per breach on average.54
The window for getting governance right before something forces your hand is closing. The EU AI Act enforcement deadline is five months out. State AI laws are proliferating despite federal pushback. The insurance market is hardening. And the AI agents being deployed today are more autonomous, more capable, and more dangerous when misconfigured than anything we’ve dealt with before.
Technology changes. Human challenges don’t. The challenge right now isn’t whether AI works. It does. The challenge is whether your organization can adopt it without creating risks you don’t understand, can’t see, and aren’t insured for.
I’ve been through enough platform shifts to know how this plays out. The organizations that invest in governance early come out ahead. The ones that wait for an incident to force their hand pay a lot more, in money, in reputation, and in trust.
Build the guardrails now. You’ll thank yourself later.
Michael Rishi Forrester is a Principal Training Architect with 30 years of experience helping organizations adopt technology safely. He has trained over 1 million engineers across platforms including KodeKloud, Coursera, O’Reilly, and YouTube. His Eight Guardrails framework for AI agent safety in Kubernetes environments was published in February 2026.
Statistics Appendix
1 Knostic, “Detect and Control: Shadow AI in the Enterprise” (2025). Estimate based on enterprise shadow AI discovery audits. www.knostic.ai/blog/shad…
2 Invicti, “Shadow AI: Risks, Challenges, and Solutions in 2025.” Employee AI usage rates from enterprise surveys. www.invicti.com/blog/web-…
3 Invicti, ibid. Only 18.5% of employees surveyed were aware of their employer’s AI acceptable use policy.
4 ISACA, “Artificial Intelligence Acceptable Use Policy Template” (2025). Based on ISACA member survey data on formal AI policy adoption. www.isaca.org/resources…
5 Gartner, prediction cited in KPMG, “Shadow AI Is Already Here: Take Control, Reduce Risk, and Unleash Innovation” (2025). kpmg.com/kpmg-us/c…
6 KPMG, ibid. Incremental cost of shadow AI-related breaches versus baseline security incidents.
7 Cloud Security Alliance, AI Controls Matrix (July 2025). 243 control objectives across 18 security domains for AI systems. cloudsecurityalliance.org
8 Computer Weekly, “Singapore debuts world’s first governance framework for agentic AI” (January 2026). Published by Singapore’s IMDA and AI Verify Foundation. www.computerweekly.com/news/3666…
9 NIST, Internal Report IR 8596 (Initial Public Draft, December 2025). “Cybersecurity Framework Profile for AI Systems.” nvlpubs.nist.gov/nistpubs/… NCCoE project page: www.nccoe.nist.gov/projects/…
10 A-LIGN, “ISO 42001 Certification” (2025). Microsoft SSPA v10 program requirements for AI suppliers. www.a-lign.com/service/i…
11 Individual press releases: Cornerstone OnDemand (https://www.cornerstoneondemand.com), Hudson Talent Solutions (GlobeNewswire, February 24, 2026), Greenhouse (PR Newswire, March 2026), Maven AGI (PR Newswire, February 2026).
12 A-LIGN, 2025 Benchmark Report. 76% of surveyed companies plan AI audit or certification within 24 months. www.a-lign.com/service/i…
13 European Commission, “AI Act: Shaping Europe’s Digital Future.” Enforcement timeline and penalty structure. digital-strategy.ec.europa.eu/en/polici… Implementation timeline: ai-act-service-desk.ec.europa.eu/en/ai-act…
14 Confident AI, “OWASP Top 10 2025 for LLM Applications: What’s New?” (2025). LLM07 (System Prompt Leakage) and LLM08 (Vector and Embedding Weaknesses) added. www.confident-ai.com/blog/owas…
15 AWS, Amazon Bedrock Guardrails product page. Automated reasoning checks for factual accuracy. aws.amazon.com/bedrock/g…
16 VentureBeat, “Nvidia tackles agentic AI safety and security with new NeMo Guardrails NIMs” (2025). Training dataset of 17,000+ known jailbreak attacks. venturebeat.com/ai/nvidia…
17 Vectra AI, “Prompt injection: types, real-world CVEs, and enterprise defenses.” Attack success rates of 50-84% across configurations. www.vectra.ai/topics/pr… Obsidian Security, “Prompt Injection Attacks: The Most Common AI Exploit in 2025.” www.obsidiansecurity.com/blog/prom…
18 Vectra AI, ibid. CVE details for Microsoft Copilot (CVSS 9.3), GitHub Copilot (CVSS 9.6), Cursor IDE (CVSS 9.8).
19 Gartner, cited in Adoptify AI, “2026 Microsoft Copilot Governance Framework: Executive Guide.” 6% pilot-to-production conversion rate, 60% stuck in pilots. www.adoptify.ai/blogs/202…
20 Knostic, “Microsoft Copilot data security and governance: A practical guide for CISOs.” AI data security incidents rising from 27% to 40%. www.knostic.ai/blog/micr…
21 Data Studios, “Claude is preferred by enterprises, ChatGPT by employees: how generative AI choices are changing within companies in 2025.” Claude at 32% of enterprise workloads, OpenAI at 25% (down from 50%). www.datastudios.org/post/clau…
22 Microsoft Learn, “Anthropic as a subprocessor for Microsoft Online Services” (January 2026). learn.microsoft.com/en-us/cop…
23 Data Studios, ibid. ChatGPT Enterprise at 5M+ paying business clients.
24 Squire Patton Boggs, “Key Insights on President Trump’s New AI Executive Order and Policy Regulatory Implications.” EO 14179 replacing EO 14110. www.squirepattonboggs.com/insights/…
25 Hunton Andrews Kurth, “OMB Issues Revised Policies on AI Use and Procurement by Federal Agencies.” M-25-21 replacing M-24-10. www.hunton.com/privacy-a… Wiley, “Trump Administration Revamps Guidance on Federal Use and Procurement of AI.” www.wiley.law/alert-Tru…
26 Sidley Austin, “Unpacking the December 11, 2025 Executive Order: Ensuring a National Policy Framework for Artificial Intelligence.” AI Litigation Task Force and state preemption provisions. www.sidley.com/en/insigh…
27 Hudson Cook, “Colorado Special Session Update: AI Law Delayed to June 2026.” www.hudsoncook.com/article/c… Epstein Becker Green, “Colorado’s Historic AI Law Survives Without Delay (So Far).” www.workforcebulletin.com/colorados…
28 Credo AI, “Latest AI Regulations Update: What Enterprises Need to Know in 2026.” 1,000+ state-level AI bills in 2025. www.credo.ai/blog/late…
29 Crowell & Moring, “CMMC for AI? Defense Policy Law Imposes AI Security Framework and Requirements on Contractors.” FY2026 NDAA provisions. www.crowell.com/en/insigh…
30 FinancialContent, “Microsoft Confirms All AI Services Meet FedRAMP High Security Standards” (December 30, 2025). markets.financialcontent.com/wral/arti…
31 GSA, “GSA and FedRAMP Announce Major Initiative: Prioritizing 20x Authorizations for AI Cloud Solutions” (August 25, 2025). www.gsa.gov/about-us/… FedScoop, “ChatGPT gets one step closer to widespread government use.” fedscoop.com/chatgpt-g…
32 TV News Check, “OpenAI, Google & Anthropic All Just Quietly Backtracked User Privacy Settings.” tvnewscheck.com/business/… Shelly Palmer, “Anthropic’s Privacy Pivot: Users Must Opt-Out by September 28” (August 2025). shellypalmer.com/2025/08/a…
33 AWS, Amazon Bedrock Guardrails. PII masking for 16+ data types. aws.amazon.com/bedrock/g…
34 Protecto AI, “AI Data Privacy Statistics & Trends 2025.” 78% of organizations unable to validate data in AI training pipelines. www.protecto.ai/blog/ai-d…
35 Protecto AI, ibid.; Lakera, “Data Loss Prevention (DLP): A Complete Guide for the GenAI Era.” 15% of employees have pasted sensitive data into public LLMs. www.lakera.ai/blog/data…
36 HIPAA Journal, “When AI Technology and HIPAA Collide.” Proposed Security Rule update covering ePHI in AI training data. www.hipaajournal.com/when-ai-t…
37 HIPAA Journal, ibid. $10.9M average cost of healthcare data breach (IBM/Ponemon 2025 data).
38 SecurePrivacy, “EU AI Act 2026 Compliance Guide.” Italian DPA Garante €15M fine against OpenAI. secureprivacy.ai/blog/eu-a…
39 Calcalist Tech, “Inside Yaron Singer’s surprising $400M sale to Cisco” (2025). www.calcalistech.com/ctechnews… Cisco, “Robust Intelligence Is Now Part of Cisco.” www.cisco.com/site/us/e…
40 Palo Alto Networks, “Palo Alto Networks Completes Acquisition of Protect AI” (2025). www.paloaltonetworks.com/company/p… GeekWire, “Palo Alto Networks to acquire Seattle cybersecurity startup Protect AI.” www.geekwire.com/2025/palo…
41 ChannelE2E, “Check Point Acquires Lakera to Build Full AI Security Stack.” www.channele2e.com/news/chec…
42 F5, “F5 to acquire CalypsoAI to bring advanced AI guardrails to large enterprises.” www.f5.com/company/n…
43 SecurityWeek, “Cybersecurity M&A Roundup: 40 Deals Announced in September 2025.” CrowdStrike/Pangea and Cato/Aim Security. www.securityweek.com/cybersecu… Infosecurity Magazine, “Cybersecurity M&A Roundup: CrowdStrike, SentinelOne and Check Point In.” www.infosecurity-magazine.com/news-feat…
44 CISO survey data aggregated from: Splunk/Cisco 2025 CISO Report (650 CISOs), Saviynt State of Identity Security Survey (235 CISOs), Team8 2025 CISO Village Survey (110+ CISOs), Proofpoint 2025 Voice of the CISO Report (1,600 CISOs globally). Percentages represent cross-survey averages where data overlaps.
45 Cronkite News / Arizona PBS, “As more lawyers fall for AI hallucinations, ChatGPT says: Check my work” (October 28, 2025). 1,005+ tracked cases, 66 sanctions, incident frequency increase. cronkitenews.azpbs.org/2025/10/2…
46 Anthropic threat intelligence disclosure, GTG-1002 (November 2025). Chinese state-sponsored actors using jailbroken Claude Code against approximately 30 organizations.
47 WTW, “Cyber risk: A look ahead to 2026” (February 2026). $16B current market, $40B projection by 2030. www.wtwco.com/en-us/ins… WTW, “Insuring the AI Age” (December 2025). www.wtwco.com/en-us/ins…
48 Insurance Business America, “Cyber insurance enters the AI risk era as limits, wording and underwriting models shift.” Coalition deepfake coverage and Embroker AI endorsements. www.insurancebusinessmag.com/us/news/c…
49 Ropes & Gray, “An End-of-Year Update to the Current State of AI Related Copyright Litigation” (December 2024, updated 2025). 151+ notable pending suits. www.ropesgray.com/en/insigh…
50 Data & Trusted AI Alliance, AI Vendor Assessment Framework (October 2025). 92% of AI vendors claim broad data usage rights versus 63% market average. Referenced in vendor evaluation research from Netguru and Pertama Partners.
51 Liminal, “Enterprise AI Governance: Complete Implementation Guide (2025).” Less than 1% of enterprises with mature AI governance programs despite 78% using AI. www.liminal.ai/blog/ente…
52 Akto, “State of Agentic AI Security 2025: Adoption, Risks & CISO Insights.” 29% prepared to defend against AI threats, 37% conduct regular AI risk assessments. www.akto.io/blog/stat…
53 SecurePrivacy, “AI Governance: Enterprise Compliance & Risk Management Guide (2026).” Governance spend benchmarks of 0.5-1% of AI technology investment. secureprivacy.ai/blog/ai-g…
54 IBM, Cost of a Data Breach Report 2025. $4.8M global average cost per data breach. Widely cited across enterprise security literature.
Deep Dive: The Platform Engineer's Guide to AI Safety — You Already Know It. You Just Don't Know It Yet.
Your team just shipped an AI feature. Maybe it’s a chatbot for customer support. Maybe it’s a code assistant integrated into your CI/CD pipeline. Maybe it’s an agent that can spin up infrastructure based on natural language requests.
And somewhere in the back of your mind, you’re wondering: Is this safe? What does “safe” even mean here? And why does everyone talking about AI safety sound like they’re either preparing for the apocalypse or dismissing the whole thing as academic noise?
Here’s what 25 years in operations taught me, from Red Hat to ThoughtWorks to AWS: every “revolutionary” technology eventually reveals itself as a variation on problems we’ve already solved. Cloud computing was just “someone else’s computer” with better APIs. Kubernetes was just “distributed systems orchestration” with a steeper learning curve. And AI safety? It’s tiered security frameworks and policy-as-code wearing a new hat.
You already know how to do this. You just don’t know you know.
The Framework You Already Understand
If you’ve ever classified workloads by sensitivity level, public-facing versus internal, PCI-compliant versus non-regulated, production versus development, you already understand AI safety levels.
Anthropic, the company behind Claude, formalized this into something called the Responsible Scaling Policy (RSP). At its core is a tiered system called AI Safety Levels (ASL). If that sounds familiar, it should. It’s directly modeled on Biosafety Levels (BSL), the framework that governs how laboratories handle dangerous pathogens.
The parallel isn’t just conceptual. It’s structural.
| Biosafety Level | What You’re Handling | Containment Required |
|---|---|---|
| BSL-1 | Non-hazardous agents (E. coli K12) | Standard lab practices |
| BSL-2 | Moderate-risk agents (Staph, Hepatitis B) | Limited access, protective equipment |
| BSL-3 | Serious/lethal agents (TB, SARS, Anthrax) | Controlled access, HEPA filtration, negative pressure |
| BSL-4 | Highest-risk agents (Ebola, Marburg) | Full isolation, positive pressure suits, airlocks |
Here’s Anthropic’s equivalent:
| AI Safety Level | Capability Profile | Safeguards Required |
|---|---|---|
| ASL-1 | No meaningful catastrophic risk | Standard practices |
| ASL-2 | Early dangerous capability signs, not exceeding what’s findable via search | Harmlessness training, Constitutional AI, SOC 2 compliance |
| ASL-3 | Substantial increase in catastrophic misuse risk or meaningful autonomous capabilities | Defense against sophisticated attackers, multi-layer prevention, continuous capability evaluation |
| ASL-4+ | State-level threats, qualitative capability escalations | Nation-state adversary protection, potentially unsolved research problems |
The principle is identical: containment scales with capability. You don’t put Ebola in a BSL-2 lab. You don’t deploy a model that can autonomously write exploit code with the same controls as a simple FAQ chatbot.
Platform engineers intuitively understand this. It’s the same reason you don’t give a junior developer production database credentials on day one. It’s the same reason PCI workloads get different network policies than internal dashboards. Risk determines control.
One thing worth knowing as of early 2026: Anthropic just released RSP v3.0 (February 2026), and it’s a significant structural change. The framework now separates Anthropic’s unilateral commitments from industry-wide recommendations, mandates public Frontier Safety Roadmaps and Risk Reports every 3-6 months, and de-emphasizes rigid ASL level thresholds in favor of requiring documented analysis and arguments for safety decisions. The core tiered-risk logic still holds, but the RSP is now more of a continuous governance system than a set of hard capability gates.
Practically speaking: Claude Opus 4 (May 2025) was the first model to activate ASL-3 protections. All subsequent frontier Claude models operate under ASL-3. Smaller models remain at ASL-2. No production model has reached ASL-4.
What Triggers an Upgrade?
In biosafety, you don’t get to decide your containment level based on gut feel. There are specific criteria, pathogen characteristics, transmission routes, available treatments, that determine which level applies.
Anthropic does the same thing with defined capability thresholds that trigger escalation.
The CBRN Threshold: Can this model significantly help a non-expert create or deploy biological, chemical, radiological, or nuclear weapons? If yes, you’re at ASL-3 minimum.
The Autonomous AI R&D Threshold: Can this model automate entry-level AI research? Could it cause a 1000x increase in effective compute within a year? (The historical rate is about 35x per year.) If yes, ASL-3 minimum.
The Model Autonomy Checkpoint: Can this model autonomously complete software engineering tasks that would take a human 2-8 hours? That’s a warning sign for capabilities that compound.
And here’s the key operational detail: Anthropic triggers a mandatory capability assessment at a 4x increase in effective compute on risk-relevant domains, or every 6 months of accumulated post-training enhancements, whichever comes first.
As a platform engineer, this should feel familiar. You already have criteria that trigger security reviews. You already have thresholds that escalate to different approval processes. The concept is identical. The specific thresholds are new.
Constitutional AI: Policy-as-Code for Model Behavior
Here’s where it gets interesting for anyone who’s worked with OPA, Kyverno, or any policy-as-code framework.
Anthropic doesn’t just hope Claude behaves well. They train behavioral constraints directly into the model using something called Constitutional AI. And if you look at it carefully, it functions exactly like the admission controllers you’re already running in your clusters.
The constitution establishes a priority hierarchy:
- Broadly Safe (highest priority) — Never undermine human oversight mechanisms
- Broadly Ethical — Act according to good values, avoid harmful actions
- Compliant with Anthropic’s Guidelines — Follow organizational policies
- Genuinely Helpful (lowest priority, the default when no conflicts exist) — Actually be useful
This is policy precedence. When rules conflict, higher-priority rules win. Same as how your Kyverno ClusterPolicies have enforcement hierarchies.
And just like your admission controllers, there are hard constraints, absolute deny rules that always block regardless of any other configuration: never assist with bioweapons, never help concentrate illegitimate power, never undermine oversight of AI systems.
If you’ve ever written a Kyverno policy that blocks privileged containers regardless of any other setting, you understand hard constraints.
An important update for 2026: Anthropic published a completely redesigned constitution in January 2026. The original was a list of standalone rules. The new version is a holistic, explanatory document that provides reasons alongside rules, distinguishes between hardcoded absolute prohibitions and adjustable defaults, and is addressed directly to Claude itself. It’s licensed under CC0 (public domain). You can read it in full at anthropic.com.
The Training Process Is a Webhook Chain
The technical implementation of Constitutional AI maps directly to admission controller patterns.
Phase 1 works like a series of validating and mutating webhooks:
- Generate a response to a potentially harmful prompt
- Randomly sample a constitutional principle
- Evaluate the response against that principle (validating webhook)
- Revise the response to better comply (mutating webhook)
- Repeat 2-4 times with different principles
- Fine-tune on the final, revised responses
Phase 2 is reinforcement learning, but the key insight is where the feedback comes from. For harmlessness evaluations, they use AI feedback (RLAIF). For helpfulness evaluations, they still use human feedback. Why? Because harmful responses are more consistently identifiable. “Is this response dangerous?” has clearer answers than “Is this response truly helpful?” The system acknowledges its own limitations.
This matters for platform engineers because it shows that even in AI training, you need layered evaluation: different checks for different risk categories, with human oversight where automation can’t be trusted.
Constitutional Classifiers: When Admission Controllers Go Into Production
Here’s the part that should genuinely excite platform engineers: Anthropic took the Constitutional AI principles and built runtime admission controllers out of them.
Constitutional Classifiers (January 2025) are input/output classifiers trained on synthetic data generated from constitutional rules. They function as ASL-3 deployment safeguards, screening prompts and responses at inference time. The v1 system withstood 3,000+ hours of red-teaming with 23.7% computational overhead and a 0.38% false positive rate.
Then they made it dramatically better. Constitutional Classifiers++ (January 2026) introduced a two-stage architecture: a lightweight probe on Claude’s internal activations screens all traffic, then escalates suspicious exchanges to a more powerful classifier. The result: roughly 1% additional compute overhead (down from 23.7%) with the lowest successful attack rate Anthropic has ever measured. No universal jailbreak has been discovered against it.
Think about what that is in infrastructure terms: a lightweight sidecar that checks activations rather than just text, escalating to a heavier classifier only when needed. It’s an async guardrail pattern with early exit, optimized for production latency. This is the same architecture you’d design for any high-throughput security gate.
The analogy in the article title isn’t just pedagogically convenient anymore. These things are literally admission controllers.
What This Actually Means for Your Monday Morning
Theory is great. Here’s what you actually do when your team is deploying AI workloads.
Scenario 1: Deploying an LLM-Powered Service
Your team needs to deploy a customer-facing chatbot or internal AI service.
Classify the workload first. What can this model actually do? What’s the worst-case misuse? An internal HR FAQ bot has a different risk profile than a code generation service that can write infrastructure. Then implement proportional controls. Don’t over-engineer low-risk deployments, and don’t under-engineer high-risk ones. Deploy guardrails as infrastructure, not application logic: input validation, output filtering, PII redaction as sidecar containers. Configure network policies, restrict egress, implement rate limiting, and log every interaction. Treat the model as a service with external attack surface.
For tooling: AWS Bedrock Guardrails, Azure AI Content Safety, NVIDIA NeMo Guardrails, and Guardrails AI (open-source) all provide varying levels of runtime protection. Meta’s LlamaFirewall bundles PromptGuard, Agent Alignment Checks, and CodeShield into a single orchestration layer if you’re self-hosting.
Scenario 2: AI Coding Assistants in Your Pipeline
Your developers are using Copilot, Cursor, or Claude Code. This scenario has gotten significantly more serious since late 2024.
In December 2025, a researcher disclosed 30+ vulnerabilities across every major AI IDE, including Cursor, Copilot, Windsurf, and Zed, with 24 CVEs assigned. That same month, a research firm tested 100+ LLMs on code generation tasks and found 45% of AI-generated code contains security flaws, with no improvement from newer or larger models. CodeRabbit analysis of real pull requests found AI co-authored code had 2.74x more security vulnerabilities than human-written code.
The practical response: treat AI as an untrusted input source. All AI-generated code gets the same scrutiny as external dependencies. Add SAST before merge, dependency scanning for AI-suggested packages, and secret detection. No AI-generated code merges without human review. That’s not a performance concern, it’s a security requirement.
One more thing worth flagging: PromptPwnd (December 2025) was the first confirmed real-world demonstration that prompt injection can compromise CI/CD pipelines. Untrusted user input in issue titles and PR descriptions was injected into AI agent prompts, which then executed privileged tools and leaked secrets. At least five Fortune 500 companies were confirmed vulnerable before the pattern was documented. Google’s own Gemini CLI repository was affected. This is not a theoretical risk category anymore.
Scenario 3: AI Agents with Infrastructure Access
An AI agent needs to create resources, adjust configurations, or respond to incidents autonomously. This is the highest-risk scenario, and the one where the “you already know this” framing matters most.
Simon Willison, one of the most rigorous practitioner voices on AI security, calls it the Lethal Trifecta: any agent that simultaneously has access to private data, exposure to untrusted content, and ability to communicate externally is a catastrophic attack surface. Most production agent deployments hit all three. The attacker doesn’t need to compromise your infrastructure directly. They just need to get malicious instructions into any content your agent will read.
The mitigation framework maps directly to zero-trust principles you already know:
Each agent gets minimal permissions scoped to specific resources, no wildcards, no implicit trust. Use Just-in-Time permissions for high-impact operations. Human-in-the-loop before consequential actions: the agent suggests, a human approves. Sandbox execution in isolated environments. Google’s GKE Agent Sandbox (currently in community preview under Kubernetes SIG Apps) provides kernel-level isolation via gVisor specifically for this use case. Log every agent decision, tool invocation, and outcome with signed audit trails. Define kill switches that are non-negotiable and physically isolated from agent control.
The OWASP Top 10 for Agentic Applications 2026 (December 2025) formalizes this with a principle called Least Agency: minimize autonomy, not just access. It’s beyond least privilege. A properly privileged agent can still cause enormous damage if it’s operating autonomously when it shouldn’t be. Least Agency means always asking whether the agent needs to take this action autonomously, or whether human confirmation is the right default.
The Gravitee State of AI Agent Security 2026 survey found that 80.9% of technical teams have agents in testing or production, but only 14.4% deployed with full security approval. 7.1% use no authentication at all for upstream agent connections. The adoption-security gap is measurable and large.
The Numbers That Should Focus Your Attention
- Of the 13% of organizations that experienced AI-specific security breaches, 97% lacked proper AI access controls (IBM Cost of a Data Breach Report, July 2025)
- 45% of AI-generated code contains security flaws, with no improvement from larger or newer models (Veracode GenAI Code Security Report, 2025)
- 80.9% of technical teams have AI agents in testing or production, but only 14.4% deployed with full security approval (Gravitee State of AI Agent Security 2026)
- 59 AI-related federal agency regulations were introduced in 2024, double the prior year, while over 1,080 AI bills were introduced across US state legislatures in 2025 (Stanford HAI AI Index 2025)
- The AI guardrails platform market is currently valued at $2.5B and projected to reach $7.29B by 2030
This isn’t theoretical risk. It’s operational reality arriving faster than most organizations are preparing for it.
What the Labs' Own Safety Frameworks Tell You About Vendor Risk
One thing platform engineers should understand when evaluating AI vendor relationships: the major labs all publish safety frameworks, and reading them tells you something real about how they think about risk management.
Anthropic’s RSP is now on v3.0. Google DeepMind’s Frontier Safety Framework reached v3.0 in September 2025, with a broader scope that covers manipulation and misalignment risks alongside catastrophic misuse. OpenAI’s Preparedness Framework v2.0 (April 2025) introduced a controversial clause allowing OpenAI to “adjust” safeguards if a rival lab releases high-risk systems, and their Mission Alignment team was disbanded in February 2026 after 16 months. Meta doesn’t publish a policy framework but releases open-source tools: LlamaFirewall, Llama Guard 4, PromptGuard 2, and CodeShield are all available and production-ready.
For vendor risk assessment, the question isn’t “do they have a safety policy” — everyone does now. The question is what’s in it and whether the governance is real or performative. The International AI Safety Report 2026, produced by 100+ experts from 30+ countries, concluded that no single AI safeguard is reliable on its own and recommended defense-in-depth. That’s infrastructure thinking applied to AI risk, and it validates why platform engineers are better positioned to lead AI governance than most of the people currently holding those roles.
The Regulatory Landscape in March 2026
The regulatory picture has shifted significantly in the last 12 months.
The EU AI Act is in partial effect. Prohibitions on unacceptable-risk AI systems have been in force since February 2025. Rules for General-Purpose AI models took effect August 2025. The high-risk provisions that originally targeted August 2026 are now facing a potential 16-month delay under the EU Digital Omnibus proposal (November 2025), pushing the new target to December 2027. EU member states have issued roughly 250 million euros in fines so far, primarily for GPAI non-compliance.
In the US, Biden’s AI Executive Order 14110 was revoked on January 20, 2025. The current administration issued a replacement order focused on “removing barriers to AI leadership” and directed a review of state AI laws deemed onerous. No comprehensive federal AI law has passed. The most practically relevant developments for platform engineers are NIST-side: NIST IR 8596 (Cybersecurity Framework Profile for AI, December 2025 draft), the NIST AI Agent Standards Initiative (February 2026), and SP 800-53 Release 5.2.0 which added AI-specific security controls. ISO/IEC 42001:2023, the world’s first certifiable AI management system standard, is worth knowing — 76% of organizations surveyed by the Cloud Security Alliance plan to pursue certification.
If you’re wondering which frameworks to prioritize: OWASP LLM Top 10 2025 gives immediate security value, MITRE ATLAS (updated October 2025 for AI agents) gives threat modeling vocabulary your security teams already speak, and NIST AI RMF maps to governance structures most enterprises already use.
The MCP Problem Nobody’s Talking About Enough
If your team builds AI agents using the Model Context Protocol, this section is not optional.
MCP had a serious year in 2025 and into 2026. Documented incidents include a WhatsApp MCP server that silently exfiltrated full chat history via tool poisoning (April 2025), a GitHub MCP vulnerability that pulled data from private repos and leaked it into a public pull request (May 2025), and Anthropic’s own MCP Inspector getting a CVE for unauthenticated remote code execution (June 2025). A critical command injection vulnerability in mcp-remote (CVSS 9.6, July 2025) affected 437,000+ downloads and was used by Cloudflare, Hugging Face, and Auth0. By October 2025, a Smithery hosting breach leaked a Fly.io API token with control over 3,000+ applications.
The attack patterns are distinct from traditional web vulnerabilities. Tool poisoning embeds hidden instructions in tool metadata. Rug pulls allow tools to silently redefine their behavior between sessions. Cross-server tool shadowing lets one MCP server override the behavior of another. A benchmark study found that o1-mini showed a 72.8% success rate against tool poisoning attacks.
The OWASP Secure MCP Server Development guide (February 2026) and the Coalition for Secure AI’s MCP Security whitepaper cover mitigation in detail. Red Hat published MCP security controls guidance. None of this existed a year ago. The tooling is catching up to the threat surface, but you need to know the threat surface exists.
The minimum viable MCP security posture: implement human confirmation before any privileged tool execution, scope MCP server permissions explicitly (no wildcards), log all tool invocations and their parameters, and treat all MCP server output as untrusted before acting on it.
You Already Know This
The first time I saw Anthropic’s Responsible Scaling Policy, I didn’t see a revolutionary new framework. I saw tiered security controls, policy enforcement, continuous evaluation, defense in depth, and governance-as-code. I saw the same patterns I’d been implementing for infrastructure my entire career.
The specifics are new. The principles aren’t.
AI safety isn’t a departure from what platform engineers already do. It’s an extension of it. The same skills that make you good at securing infrastructure make you good at governing AI workloads. The same instincts that tell you “this deployment needs more controls” apply directly to model capabilities.
The field has accelerated this point. Constitutional Classifiers now function as literal admission controllers. OWASP published an Agentic Top 10 with a “Least Agency” principle that maps directly to least privilege. GKE Agent Sandbox brings container isolation semantics to AI agent execution. The International AI Safety Report calls for defense-in-depth. The tools are speaking your language because the problems are the same problems.
The Gravitee data says 80.9% of teams are building with AI agents. Only 14.4% are securing them properly. That gap is your professional opportunity and your responsibility.
You already know how to classify workloads by risk. You already know how to implement proportional controls. You already know how to enforce policies declaratively. You already know how to build defense in depth. You already know what least privilege means and why it matters.
You just need to recognize that AI safety is the same discipline, applied to a new domain. The question isn’t whether you’re qualified to lead AI governance in your organization. The question is whether you’ll step up before someone less qualified gets there first.
Michael Rishi Forrester is Principal Training Architect at KodeKloud and founder of The Performant Professionals. With 25+ years in operations and DevOps across Red Hat, ThoughtWorks, AWS, and beyond, he focuses on preparing tomorrow’s innovators while elevating the average.
Opinion: The Platform Engineer's Guide to AI Safety — You Already Know It. You Just Don't Know It Yet.
Your team just shipped an AI feature. Maybe it’s a chatbot for customer support. Maybe it’s a code assistant integrated into your CI/CD pipeline. Maybe it’s an agent that can spin up infrastructure based on natural language requests.
And somewhere in the back of your mind, you’re wondering: Is this safe? What does “safe” even mean here? And why does everyone talking about AI safety sound like they’re either preparing for the apocalypse or dismissing the whole thing as academic noise?
Here’s what 25 years in operations taught me, from Red Hat to ThoughtWorks to AWS: every “revolutionary” technology eventually reveals itself as a variation on problems we’ve already solved. Cloud computing was just “someone else’s computer” with better APIs. Kubernetes was just “distributed systems orchestration” with a steeper learning curve. And AI safety? It’s tiered security frameworks and policy-as-code wearing a new hat.
You already know how to do this. You just don’t know you know.
The Framework You Already Understand
If you’ve ever classified workloads by sensitivity level, public-facing versus internal, PCI-compliant versus non-regulated, production versus development, you already understand AI safety levels.
Anthropic, the company behind Claude, formalized this into something called the Responsible Scaling Policy (RSP). At its core is a tiered system called AI Safety Levels (ASL). If that sounds familiar, it should. It’s directly modeled on Biosafety Levels (BSL), the framework that governs how laboratories handle dangerous pathogens.
The parallel isn’t just conceptual. It’s structural.
| Biosafety Level | What You’re Handling | Containment Required |
|---|---|---|
| BSL-1 | Non-hazardous agents (E. coli K12) | Standard lab practices |
| BSL-2 | Moderate-risk agents (Staph, Hepatitis B) | Limited access, protective equipment |
| BSL-3 | Serious/lethal agents (TB, SARS, Anthrax) | Controlled access, HEPA filtration, negative pressure |
| BSL-4 | Highest-risk agents (Ebola, Marburg) | Full isolation, positive pressure suits, airlocks |
Here’s Anthropic’s equivalent:
| AI Safety Level | Capability Profile | Safeguards Required |
|---|---|---|
| ASL-1 | No meaningful catastrophic risk | Standard practices |
| ASL-2 | Early dangerous capability signs, not exceeding what’s findable via search | Harmlessness training, Constitutional AI, SOC 2 compliance |
| ASL-3 | Substantial increase in catastrophic misuse risk or meaningful autonomous capabilities | Defense against sophisticated attackers, multi-layer prevention, continuous capability evaluation |
| ASL-4+ | State-level threats, qualitative capability escalations | Nation-state adversary protection, potentially unsolved research problems |
The principle is identical: containment scales with capability. You don’t put Ebola in a BSL-2 lab. You don’t deploy a model that can autonomously write exploit code with the same controls as a simple FAQ chatbot.
Platform engineers intuitively understand this. It’s the same reason you don’t give a junior developer production database credentials on day one. It’s the same reason PCI workloads get different network policies than internal dashboards. Risk determines control.
One thing worth knowing as of early 2026: Anthropic just released RSP v3.0 (February 2026), and it’s a significant structural change. The framework now separates Anthropic’s unilateral commitments from industry-wide recommendations, mandates public Frontier Safety Roadmaps and Risk Reports every 3-6 months, and de-emphasizes rigid ASL level thresholds in favor of requiring documented analysis and arguments for safety decisions. The core tiered-risk logic still holds, but the RSP is now more of a continuous governance system than a set of hard capability gates.
Practically speaking: Claude Opus 4 (May 2025) was the first model to activate ASL-3 protections. All subsequent frontier Claude models operate under ASL-3. Smaller models remain at ASL-2. No production model has reached ASL-4.
What Triggers an Upgrade?
In biosafety, you don’t get to decide your containment level based on gut feel. There are specific criteria, pathogen characteristics, transmission routes, available treatments, that determine which level applies.
Anthropic does the same thing with defined capability thresholds that trigger escalation.
The CBRN Threshold: Can this model significantly help a non-expert create or deploy biological, chemical, radiological, or nuclear weapons? If yes, you’re at ASL-3 minimum.
The Autonomous AI R&D Threshold: Can this model automate entry-level AI research? Could it cause a 1000x increase in effective compute within a year? (The historical rate is about 35x per year.) If yes, ASL-3 minimum.
The Model Autonomy Checkpoint: Can this model autonomously complete software engineering tasks that would take a human 2-8 hours? That’s a warning sign for capabilities that compound.
And here’s the key operational detail: Anthropic triggers a mandatory capability assessment at a 4x increase in effective compute on risk-relevant domains, or every 6 months of accumulated post-training enhancements, whichever comes first.
As a platform engineer, this should feel familiar. You already have criteria that trigger security reviews. You already have thresholds that escalate to different approval processes. The concept is identical. The specific thresholds are new.
Constitutional AI: Policy-as-Code for Model Behavior
Here’s where it gets interesting for anyone who’s worked with OPA, Kyverno, or any policy-as-code framework.
Anthropic doesn’t just hope Claude behaves well. They train behavioral constraints directly into the model using something called Constitutional AI. And if you look at it carefully, it functions exactly like the admission controllers you’re already running in your clusters.
The constitution establishes a priority hierarchy:
- Broadly Safe (highest priority) — Never undermine human oversight mechanisms
- Broadly Ethical — Act according to good values, avoid harmful actions
- Compliant with Anthropic’s Guidelines — Follow organizational policies
- Genuinely Helpful (lowest priority, the default when no conflicts exist) — Actually be useful
This is policy precedence. When rules conflict, higher-priority rules win. Same as how your Kyverno ClusterPolicies have enforcement hierarchies.
And just like your admission controllers, there are hard constraints, absolute deny rules that always block regardless of any other configuration: never assist with bioweapons, never help concentrate illegitimate power, never undermine oversight of AI systems.
If you’ve ever written a Kyverno policy that blocks privileged containers regardless of any other setting, you understand hard constraints.
An important update for 2026: Anthropic published a completely redesigned constitution in January 2026. The original was a list of standalone rules. The new version is a holistic, explanatory document that provides reasons alongside rules, distinguishes between hardcoded absolute prohibitions and adjustable defaults, and is addressed directly to Claude itself. It’s licensed under CC0 (public domain). You can read it in full at anthropic.com.
The Training Process Is a Webhook Chain
The technical implementation of Constitutional AI maps directly to admission controller patterns.
Phase 1 works like a series of validating and mutating webhooks:
- Generate a response to a potentially harmful prompt
- Randomly sample a constitutional principle
- Evaluate the response against that principle (validating webhook)
- Revise the response to better comply (mutating webhook)
- Repeat 2-4 times with different principles
- Fine-tune on the final, revised responses
Phase 2 is reinforcement learning, but the key insight is where the feedback comes from. For harmlessness evaluations, they use AI feedback (RLAIF). For helpfulness evaluations, they still use human feedback. Why? Because harmful responses are more consistently identifiable. “Is this response dangerous?” has clearer answers than “Is this response truly helpful?” The system acknowledges its own limitations.
This matters for platform engineers because it shows that even in AI training, you need layered evaluation: different checks for different risk categories, with human oversight where automation can’t be trusted.
Constitutional Classifiers: When Admission Controllers Go Into Production
Here’s the part that should genuinely excite platform engineers: Anthropic took the Constitutional AI principles and built runtime admission controllers out of them.
Constitutional Classifiers (January 2025) are input/output classifiers trained on synthetic data generated from constitutional rules. They function as ASL-3 deployment safeguards, screening prompts and responses at inference time. The v1 system withstood 3,000+ hours of red-teaming with 23.7% computational overhead and a 0.38% false positive rate.
Then they made it dramatically better. Constitutional Classifiers++ (January 2026) introduced a two-stage architecture: a lightweight probe on Claude’s internal activations screens all traffic, then escalates suspicious exchanges to a more powerful classifier. The result: roughly 1% additional compute overhead (down from 23.7%) with the lowest successful attack rate Anthropic has ever measured. No universal jailbreak has been discovered against it.
Think about what that is in infrastructure terms: a lightweight sidecar that checks activations rather than just text, escalating to a heavier classifier only when needed. It’s an async guardrail pattern with early exit, optimized for production latency. This is the same architecture you’d design for any high-throughput security gate.
The analogy in the article title isn’t just pedagogically convenient anymore. These things are literally admission controllers.
What This Actually Means for Your Monday Morning
Theory is great. Here’s what you actually do when your team is deploying AI workloads.
Scenario 1: Deploying an LLM-Powered Service
Your team needs to deploy a customer-facing chatbot or internal AI service.
Classify the workload first. What can this model actually do? What’s the worst-case misuse? An internal HR FAQ bot has a different risk profile than a code generation service that can write infrastructure. Then implement proportional controls. Don’t over-engineer low-risk deployments, and don’t under-engineer high-risk ones. Deploy guardrails as infrastructure, not application logic: input validation, output filtering, PII redaction as sidecar containers. Configure network policies, restrict egress, implement rate limiting, and log every interaction. Treat the model as a service with external attack surface.
For tooling: AWS Bedrock Guardrails, Azure AI Content Safety, NVIDIA NeMo Guardrails, and Guardrails AI (open-source) all provide varying levels of runtime protection. Meta’s LlamaFirewall bundles PromptGuard, Agent Alignment Checks, and CodeShield into a single orchestration layer if you’re self-hosting.
Scenario 2: AI Coding Assistants in Your Pipeline
Your developers are using Copilot, Cursor, or Claude Code. This scenario has gotten significantly more serious since late 2024.
In December 2025, a researcher disclosed 30+ vulnerabilities across every major AI IDE, including Cursor, Copilot, Windsurf, and Zed, with 24 CVEs assigned. That same month, a research firm tested 100+ LLMs on code generation tasks and found 45% of AI-generated code contains security flaws, with no improvement from newer or larger models. CodeRabbit analysis of real pull requests found AI co-authored code had 2.74x more security vulnerabilities than human-written code.
The practical response: treat AI as an untrusted input source. All AI-generated code gets the same scrutiny as external dependencies. Add SAST before merge, dependency scanning for AI-suggested packages, and secret detection. No AI-generated code merges without human review. That’s not a performance concern, it’s a security requirement.
One more thing worth flagging: PromptPwnd (December 2025) was the first confirmed real-world demonstration that prompt injection can compromise CI/CD pipelines. Untrusted user input in issue titles and PR descriptions was injected into AI agent prompts, which then executed privileged tools and leaked secrets. At least five Fortune 500 companies were confirmed vulnerable before the pattern was documented. Google’s own Gemini CLI repository was affected. This is not a theoretical risk category anymore.
Scenario 3: AI Agents with Infrastructure Access
An AI agent needs to create resources, adjust configurations, or respond to incidents autonomously. This is the highest-risk scenario, and the one where the “you already know this” framing matters most.
Simon Willison, one of the most rigorous practitioner voices on AI security, calls it the Lethal Trifecta: any agent that simultaneously has access to private data, exposure to untrusted content, and ability to communicate externally is a catastrophic attack surface. Most production agent deployments hit all three. The attacker doesn’t need to compromise your infrastructure directly. They just need to get malicious instructions into any content your agent will read.
The mitigation framework maps directly to zero-trust principles you already know:
Each agent gets minimal permissions scoped to specific resources, no wildcards, no implicit trust. Use Just-in-Time permissions for high-impact operations. Human-in-the-loop before consequential actions: the agent suggests, a human approves. Sandbox execution in isolated environments. Google’s GKE Agent Sandbox (currently in community preview under Kubernetes SIG Apps) provides kernel-level isolation via gVisor specifically for this use case. Log every agent decision, tool invocation, and outcome with signed audit trails. Define kill switches that are non-negotiable and physically isolated from agent control.
The OWASP Top 10 for Agentic Applications 2026 (December 2025) formalizes this with a principle called Least Agency: minimize autonomy, not just access. It’s beyond least privilege. A properly privileged agent can still cause enormous damage if it’s operating autonomously when it shouldn’t be. Least Agency means always asking whether the agent needs to take this action autonomously, or whether human confirmation is the right default.
The Gravitee State of AI Agent Security 2026 survey found that 80.9% of technical teams have agents in testing or production, but only 14.4% deployed with full security approval. 7.1% use no authentication at all for upstream agent connections. The adoption-security gap is measurable and large.
The Numbers That Should Focus Your Attention
- Of the 13% of organizations that experienced AI-specific security breaches, 97% lacked proper AI access controls (IBM Cost of a Data Breach Report, July 2025)
- 45% of AI-generated code contains security flaws, with no improvement from larger or newer models (Veracode GenAI Code Security Report, 2025)
- 80.9% of technical teams have AI agents in testing or production, but only 14.4% deployed with full security approval (Gravitee State of AI Agent Security 2026)
- 59 AI-related federal agency regulations were introduced in 2024, double the prior year, while over 1,080 AI bills were introduced across US state legislatures in 2025 (Stanford HAI AI Index 2025)
- The AI guardrails platform market is currently valued at $2.5B and projected to reach $7.29B by 2030
This isn’t theoretical risk. It’s operational reality arriving faster than most organizations are preparing for it.
What the Labs' Own Safety Frameworks Tell You About Vendor Risk
One thing platform engineers should understand when evaluating AI vendor relationships: the major labs all publish safety frameworks, and reading them tells you something real about how they think about risk management.
Anthropic’s RSP is now on v3.0. Google DeepMind’s Frontier Safety Framework reached v3.0 in September 2025, with a broader scope that covers manipulation and misalignment risks alongside catastrophic misuse. OpenAI’s Preparedness Framework v2.0 (April 2025) introduced a controversial clause allowing OpenAI to “adjust” safeguards if a rival lab releases high-risk systems, and their Mission Alignment team was disbanded in February 2026 after 16 months. Meta doesn’t publish a policy framework but releases open-source tools: LlamaFirewall, Llama Guard 4, PromptGuard 2, and CodeShield are all available and production-ready.
For vendor risk assessment, the question isn’t “do they have a safety policy” — everyone does now. The question is what’s in it and whether the governance is real or performative. The International AI Safety Report 2026, produced by 100+ experts from 30+ countries, concluded that no single AI safeguard is reliable on its own and recommended defense-in-depth. That’s infrastructure thinking applied to AI risk, and it validates why platform engineers are better positioned to lead AI governance than most of the people currently holding those roles.
The Regulatory Landscape in March 2026
The regulatory picture has shifted significantly in the last 12 months.
The EU AI Act is in partial effect. Prohibitions on unacceptable-risk AI systems have been in force since February 2025. Rules for General-Purpose AI models took effect August 2025. The high-risk provisions that originally targeted August 2026 are now facing a potential 16-month delay under the EU Digital Omnibus proposal (November 2025), pushing the new target to December 2027. EU member states have issued roughly 250 million euros in fines so far, primarily for GPAI non-compliance.
In the US, Biden’s AI Executive Order 14110 was revoked on January 20, 2025. The current administration issued a replacement order focused on “removing barriers to AI leadership” and directed a review of state AI laws deemed onerous. No comprehensive federal AI law has passed. The most practically relevant developments for platform engineers are NIST-side: NIST IR 8596 (Cybersecurity Framework Profile for AI, December 2025 draft), the NIST AI Agent Standards Initiative (February 2026), and SP 800-53 Release 5.2.0 which added AI-specific security controls. ISO/IEC 42001:2023, the world’s first certifiable AI management system standard, is worth knowing — 76% of organizations surveyed by the Cloud Security Alliance plan to pursue certification.
If you’re wondering which frameworks to prioritize: OWASP LLM Top 10 2025 gives immediate security value, MITRE ATLAS (updated October 2025 for AI agents) gives threat modeling vocabulary your security teams already speak, and NIST AI RMF maps to governance structures most enterprises already use.
The MCP Problem Nobody’s Talking About Enough
If your team builds AI agents using the Model Context Protocol, this section is not optional.
MCP had a serious year in 2025 and into 2026. Documented incidents include a WhatsApp MCP server that silently exfiltrated full chat history via tool poisoning (April 2025), a GitHub MCP vulnerability that pulled data from private repos and leaked it into a public pull request (May 2025), and Anthropic’s own MCP Inspector getting a CVE for unauthenticated remote code execution (June 2025). A critical command injection vulnerability in mcp-remote (CVSS 9.6, July 2025) affected 437,000+ downloads and was used by Cloudflare, Hugging Face, and Auth0. By October 2025, a Smithery hosting breach leaked a Fly.io API token with control over 3,000+ applications.
The attack patterns are distinct from traditional web vulnerabilities. Tool poisoning embeds hidden instructions in tool metadata. Rug pulls allow tools to silently redefine their behavior between sessions. Cross-server tool shadowing lets one MCP server override the behavior of another. A benchmark study found that o1-mini showed a 72.8% success rate against tool poisoning attacks.
The OWASP Secure MCP Server Development guide (February 2026) and the Coalition for Secure AI’s MCP Security whitepaper cover mitigation in detail. Red Hat published MCP security controls guidance. None of this existed a year ago. The tooling is catching up to the threat surface, but you need to know the threat surface exists.
The minimum viable MCP security posture: implement human confirmation before any privileged tool execution, scope MCP server permissions explicitly (no wildcards), log all tool invocations and their parameters, and treat all MCP server output as untrusted before acting on it.
You Already Know This
The first time I saw Anthropic’s Responsible Scaling Policy, I didn’t see a revolutionary new framework. I saw tiered security controls, policy enforcement, continuous evaluation, defense in depth, and governance-as-code. I saw the same patterns I’d been implementing for infrastructure my entire career.
The specifics are new. The principles aren’t.
AI safety isn’t a departure from what platform engineers already do. It’s an extension of it. The same skills that make you good at securing infrastructure make you good at governing AI workloads. The same instincts that tell you “this deployment needs more controls” apply directly to model capabilities.
The field has accelerated this point. Constitutional Classifiers now function as literal admission controllers. OWASP published an Agentic Top 10 with a “Least Agency” principle that maps directly to least privilege. GKE Agent Sandbox brings container isolation semantics to AI agent execution. The International AI Safety Report calls for defense-in-depth. The tools are speaking your language because the problems are the same problems.
The Gravitee data says 80.9% of teams are building with AI agents. Only 14.4% are securing them properly. That gap is your professional opportunity and your responsibility.
You already know how to classify workloads by risk. You already know how to implement proportional controls. You already know how to enforce policies declaratively. You already know how to build defense in depth. You already know what least privilege means and why it matters.
You just need to recognize that AI safety is the same discipline, applied to a new domain. The question isn’t whether you’re qualified to lead AI governance in your organization. The question is whether you’ll step up before someone less qualified gets there first.
Michael Rishi Forrester is Principal Training Architect at KodeKloud and founder of The Performant Professionals. With 25+ years in operations and DevOps across Red Hat, ThoughtWorks, AWS, and beyond, he focuses on preparing tomorrow’s innovators while elevating the average.
Deep Dive: The Implementation Layer Is Dissolving
One Year In, the Shift Is No Longer a Prediction
March 2026
A year ago, I stood in front of a room full of founders and told them a command-line tool would change everything about how we think about building software. I showed them Claude Code. A few of them were excited. Most were polite. Some were clearly thinking I was overselling it.
Last week, one of those founders texted me. Her three-person startup just shipped a product that would have taken a team of twelve in 2023. She didn’t hire more engineers. She got better at telling agents exactly what to build.
In March 2025, I wrote that the implementation layer was dissolving. That the bottleneck was shifting from writing code to articulating what you actually want. I called it a “specification renaissance.”
I was right about the direction. I was wrong about the speed.
The Priesthood Falls (Again)
The history hasn’t changed, so I’ll keep this brief for anyone who read the original.
In 1957, IBM released FORTRAN. For the first time, programmers could write AREA = 3.14159 * RADIUS ** 2 instead of dozens of machine code instructions. John Backus, the creator, later described the culture he was challenging: a “priesthood of programming” who regarded themselves as “guardians of mysteries far too complex for ordinary mortals.”
Grace Hopper pushed compilers against resistance from people who thought automatic programming was crazy. Colleagues worried it would make programmers obsolete.
They were right to be nervous. They were wrong about the outcome. High-level languages created an explosion in demand for programmers. The skills that made you valuable as an assembly programmer became irrelevant to 99% of the work. New skills became essential.
The priests didn’t disappear. They transformed.
That same pattern is playing out right now. Except this time, we have the receipts.
From “Gradually” to “Suddenly”
In the original piece, I quoted the old line about change happening “gradually, then suddenly.” I told those founders we were in the “gradually” phase.
We’re not anymore.
In 2025, GitHub saw 43 million pull requests merged per month, a 23% increase year over year. Annual commits pushed jumped 25% to nearly one billion. Roughly 85% of developers now use AI coding tools on a regular basis, and around 46% of all code written by active developers comes from AI. Those aren’t projections. Those are actuals.
Stripe’s internal AI agents produce over 1,000 merged pull requests every week. TELUS saved more than 500,000 hours using AI-driven development across 13,000 internal solutions. Zapier hit 89% AI adoption across their entire organization. This isn’t a pilot program or a handful of early adopters running experiments. This is how software gets built now at companies that are paying attention.
A year ago I said the teams achieving 10x productivity gains were still outliers. Some of them still are. But 2x to 5x gains are now common among engineers who have figured out how to work with agents properly. The key word there is “properly,” and I’ll come back to that.
The Vibe Coding Arc
Something happened this past year that I didn’t predict, mostly because the term didn’t exist yet when I wrote the original piece.
In February 2025, Andrej Karpathy posted a tweet describing what he called “vibe coding.” You give in to the vibes, embrace the output, forget that the code even exists. It was a throwaway thought. A shower-time tweet. It became the dominant frame for AI-assisted development for an entire year.
Then people tried to ship vibe-coded software to production. Security vulnerabilities. Unmaintainable architecture. Accumulated technical debt from code nobody reviewed because nobody could read it. The industry learned a hard lesson: AI can generate code faster than any human, but speed without direction produces expensive garbage at scale.
Exactly one year after his original tweet, Karpathy posted again. This time he retired the term. In its place, he proposed “agentic engineering,” which he defined as the discipline where you’re not writing the code directly 99% of the time, you’re orchestrating agents who do, and acting as oversight. The word “engineering” was deliberate. Art, science, and professional skill. Something you can learn and get better at.
This arc, from vibe coding to agentic engineering, is exactly the transition I described in 2025 as the shift from implementation to articulation. The industry just needed a year of painful experience to validate it.
The Bottleneck Moved (Again)
Here’s what I said last year: the bottleneck shifted from implementation to articulation. The person who can clearly specify what they want now has more leverage than an entire team of developers who are fuzzy on the requirements.
That’s still true. But the bottleneck has already moved again, and it moved faster than I expected.
We now have three distinct bottlenecks that surface depending on the maturity of the team:
Specification is still the first wall most people hit. If you can’t describe what you want built with precision, agents will build you something that compiles, passes obvious tests, and solves the wrong problem. A Google Cloud PM recently shared a story about an intern who accomplished more in an afternoon with Claude Code than a senior engineer could do in three days. The difference wasn’t the tool. The intern was better at breaking down the problem into clear, verifiable subtasks.
Context is the wall that hits next. Prompt engineering was the buzzword in 2023 and 2024. In 2026, it’s been overtaken by context engineering. Anthropic published guidance on this, defining it as the discipline of curating and maintaining the optimal set of information during inference. ThoughtWorks calls it the biggest shift in developer experience this year. The idea is straightforward: the quality of an agent’s output depends less on how cleverly you phrase your request and more on what information the agent has access to when it works. Your codebase conventions, your architectural decisions, your team’s patterns, your domain constraints. If those things aren’t structured in a way agents can consume, it doesn’t matter how good the model gets.
Governance is the wall that hits at scale. When a single agent can produce a thousand PRs a week, you need automated security scanning, audit trails, quality gates, and clear ownership models. Manual code review can’t keep pace with agent-generated output. The organizations figuring this out are the ones who will scale. The ones who aren’t are building a mountain of technical debt they can’t see yet.
Context Engineering Is the New Skill
In the original article, I listed “context curation” as a becoming-essential skill. I buried it in a bullet point. That was a mistake. It deserved its own section, because it turned out to be one of the defining skill categories of 2026.
Here’s the short version. Large language models don’t learn your preferences through osmosis. They know exactly what’s in their context window and nothing else. Early on, people treated that as a limitation. Smart teams now treat it as a design surface.
Anthropic open-sourced the Agent Skills specification in late 2025. It’s been adopted by Claude Code, GitHub Copilot, Cursor, and OpenAI Codex. The idea is that instead of stuffing everything into a system prompt and hoping the agent pays attention, you structure your expertise as modular skills that get loaded on demand. Write a skill once, use it everywhere. The agent loads only what it needs for the current task, which keeps the context lean and the output consistent.
This matters because of something researchers call “context rot.” Million-token context windows sound impressive, but the transformer architecture forces every token to attend to every other token. At 4,000 tokens, that works great. At 400,000 tokens, performance degrades in predictable ways. Information in the middle gets lost. Attention spreads thin. The agent’s effective intelligence drops.
Context engineering solves this by being deliberate about what goes in. Your project’s CLAUDE.md file, your architectural rules, your team conventions, your skill definitions. These are the new artifacts that determine output quality. For platform engineers, this should feel familiar. We’ve spent careers building abstractions, writing documentation, creating self-service interfaces. Context engineering is that same work, aimed at a new consumer: the AI agent.
What Claude Code Actually Looks Like in 2026
A year ago, I described Claude Code as “an autonomous agent that can understand a problem description, break it into components, implement solutions across multiple files, run tests, fix bugs, and iterate.” That description is still technically accurate in the way that describing a car as “a box with wheels that moves” is technically accurate.
Here’s what the tool actually does now.
Claude Code runs agent teams. Not one agent working through a task sequentially, but multiple specialized agents working in parallel. You can have one agent analyzing code for reuse opportunities, another reviewing for quality issues, and a third checking for efficiency problems, all running simultaneously and reporting back. This isn’t a beta feature anymore. It shipped.
It has voice mode. You can speak your instructions instead of typing them. This sounds like a convenience feature until you realize how it changes the workflow. Describing architecture out loud, talking through constraints, explaining the problem context verbally while the agent works. Coding sessions now look less like typing at a terminal and more like directing a team.
It remembers things. Not in the way early AI tools pretended to remember by echoing back your last message, but through actual persistent memory across sessions. Architectural preferences, coding patterns, project context. The agent picks up where it left off.
It has a full 1M token context window on Opus 4.6. Skills that load on demand. Native IDE extensions for VS Code and JetBrains. Background agents that run while you do other work. A /loop command that executes recurring tasks on a schedule.
The tool I showed those founders in 2025 was a preview. What exists now is a development environment where the human’s job is to specify, direct, and verify while the agents handle execution.
Platform Engineers as Agent Architects
In the original piece, I called platform engineers “intent orchestrators.” I still like that framing, but it needs updating.
The reality in 2026 is that platform teams are becoming the people who design and maintain the systems that agents operate within. Not just infrastructure provisioning and CI/CD pipelines, but the guardrails, the quality gates, the context structures, and the governance frameworks that make agent-driven development safe at scale.
Think about it this way. When a single developer can deploy five parallel agents that each generate, test, and refine code autonomously, somebody needs to make sure those agents are working within the right constraints. Somebody needs to define what “safe self-service” looks like when the self-service consumer is an AI agent rather than a human developer. Somebody needs to build the automated security scanning that catches vulnerabilities at agent speed rather than human speed.
That somebody is the platform team.
The role isn’t less technical. If anything, it’s more technical. You need to understand how context windows work to design effective skills. You need to understand agent orchestration patterns to build useful development workflows. You need to understand security at a systems level to build governance that scales. This is the natural evolution of what platform engineering has always been, abstracting complexity and providing guardrails, but the consumer has changed.
The New Practitioners
Here’s something I didn’t cover in the original piece, and I should have.
The practitioners building with these tools are no longer limited to professional software engineers. A thoracic surgeon publicly shared that he learned to code through Claude Code, ran 67 autonomous agent sessions, and shipped a full-stack platform with a blog, analytics, and multi-agent orchestration. Not a toy app. A production system.
Product managers are building working prototypes to validate ideas before writing a single spec document. Researchers are building data analysis pipelines and frontend visualizations without waiting on engineering teams. Security teams are using agents to analyze unfamiliar codebases. Domain experts with deep knowledge of their field but no formal computer science training are building real tools that solve real problems.
This is the FORTRAN parallel playing out in real time. When compilers eliminated the need to think in machine language, the number of people who could build software exploded. The same thing is happening now. The barrier to entry hasn’t dropped to zero, you still need clear thinking, domain knowledge, and the discipline to verify outputs, but it has dropped far enough that the population of builders is expanding rapidly.
The bottleneck isn’t “can you code?” anymore. It’s “do you have taste?” Do you understand the problem well enough to know when the agent’s output is right and when it’s subtly wrong? Do you know what good looks like in your domain? Can you architect a solution before you try to build it?
The New Skill Stack (Updated)
Here’s the revised version of what I listed a year ago, adjusted for what we’ve actually learned.
Still essential, now proven: Systems thinking and architectural reasoning remain the foundation. If you can’t think about how components interact, agents will build you a collection of parts that don’t work together. Understanding tradeoffs between consistency and availability, speed and safety, simplicity and flexibility. These haven’t diminished in value at all. Domain expertise, meaning deep knowledge of the actual problem space, is more valuable than it’s ever been.
Now essential, no longer emerging: Clear written specification. Your specs are your code now, and the quality of your writing directly determines the quality of your output. Context engineering across rules, skills, memory, and project documentation. Multi-agent orchestration, understanding when to run agents in parallel versus sequentially, when to use specialized models for specific tasks, and how to coordinate handoffs between agents. Verification thinking, knowing how to validate that what the agent built actually solves your problem, not just that it compiles and passes tests.
Newly essential, barely discussed a year ago: Governance design. Building quality gates, security scanning, and audit trails that work at agent speed. Agent Skills authoring, structuring your team’s expertise as portable, reusable skill definitions that any agent can consume. Cognitive load management, being deliberate about what goes into context and what stays out, because more information doesn’t mean better results.
Diminished further than expected: Memorizing syntax and APIs was already declining. It’s now genuinely irrelevant for most work. Writing boilerplate code, generating test scaffolds, creating standard configurations. These are fully delegated tasks now. Manual code review for routine issues. Agents catch obvious problems faster than humans. The interesting review work, the architectural stuff, the subtle design implications, that’s still human territory.
What This Means for Education
I build training courses. I’ve been doing this for over twenty-five years. I’ve trained more than a million engineers through KodeKloud. And the model I’ve been using is fundamentally changing.
The old approach was: learn the syntax, practice the mechanics, build muscle memory, eventually understand the patterns. We front-loaded implementation skills because implementation was the bottleneck.
That model still produces people who can write code. It doesn’t produce people who can direct agents effectively. The gap between writing code yourself and specifying what an agent should build is real, and our training pipelines haven’t caught up.
The new model needs to front-load specification, context design, and verification. Teach people to think architecturally before they think syntactically. Teach them to write CLAUDE.md files and Agent Skills and structured constraints. Teach them to evaluate AI output critically, not just for correctness but for maintainability, security, and alignment with actual business requirements.
I’m not saying we abandon technical depth. The engineers who thrive in this environment are the ones who understand systems deeply enough to catch when an agent produces something that looks right but isn’t. But the sequence changes. Architecture first, then implementation details. Verification first, then generation. The ability to explain what you want clearly, before the ability to build it yourself.
The Cognitive Debt Problem
There’s a new failure mode that didn’t exist when I wrote the original piece. People are calling it “cognitive debt,” and it’s the accumulated cost of poorly managed AI interactions, context loss, and unreliable agent behavior over time.
Here’s how it shows up. A team adopts AI coding tools aggressively. Individual productivity metrics go up. Lines of code increase. Pull requests ship faster. But organizational delivery stays flat or actually degrades. Nobody fully understands the codebase anymore because significant portions were generated rather than written. Bugs in AI-generated code take longer to diagnose because the developer who “wrote” it doesn’t have the mental model of how it works. Knowledge that used to live in people’s heads now lives in agent context that gets lost between sessions.
This is the productivity paradox I mentioned in 2025, and it’s now well documented. The teams that avoid it are the ones who treat agent-generated code with the same rigor they’d apply to code from a new hire who’s fast but unfamiliar with the codebase. Review it. Understand it. Document why it exists. Build tests that capture the intent, not just the behavior.
Cognitive debt is what happens when you get the “vibe coding” speed without the “agentic engineering” discipline.
The Dissolution, One Year Later
Here’s the thing about layers dissolving: the layer doesn’t disappear. It becomes infrastructure. It becomes assumed. It becomes boring.
Assembly language didn’t vanish. Somewhere right now, someone is writing assembly for a device driver or a kernel module. But for 99% of programmers, it’s invisible. A layer they never touch.
In 2025, I said implementation wouldn’t vanish either. That prediction has held up. Code is still being written. It’s just increasingly written by agents operating under human direction. The layer hasn’t disappeared. It’s dropped below the line of what most practitioners need to think about directly.
What I underestimated was how fast the next layer would form on top. We’re already seeing the early signs of a world where you don’t just tell an agent what to build. You tell a team of agents, each with specialized skills, operating within governance frameworks, drawing from structured context, reporting through quality gates. The human isn’t just a specifier anymore. The human is the architect of the system that specifies.
It’s layers all the way up.
A year ago, I asked: “What are you going to build when the implementation layer dissolves?”
Some people answered that question. A surgeon built a platform. A three-person startup outshipped a team of twelve. Platform engineers became agent architects. Non-developers became builders.
The question for 2026 isn’t about whether the layer is dissolving. That’s settled. The question is whether you’re building the skills to work at the layer above it. Context engineering. Agent orchestration. Governance design. Specification as a discipline rather than an afterthought.
The implementation layer dissolved.
The orchestration layer is forming.
What are you going to build on top of it?
Michael Rishi Forrester is a Principal Training Architect and DevOps Advocate at KodeKloud, founder of The Performant Professionals, and has been preparing tomorrow’s innovators for over 25 years. He has trained more than 1 million engineers and focuses on helping technical professionals adapt through industry transformations.
Connect: @peopleforrester | linkedin.com/in/michaelrishiforrester | michaelrishiforrester.com
Tags: #AI #DevOps #PlatformEngineering #FutureOfWork #ClaudeCode #AgenticEngineering #ContextEngineering #SoftwareDevelopment #TechLeadership
Opinion: The Implementation Layer Is Dissolving
One Year In, the Shift Is No Longer a Prediction
March 2026
A year ago, I stood in front of a room full of founders and told them a command-line tool would change everything about how we think about building software. I showed them Claude Code. A few of them were excited. Most were polite. Some were clearly thinking I was overselling it.
Last week, one of those founders texted me. Her three-person startup just shipped a product that would have taken a team of twelve in 2023. She didn’t hire more engineers. She got better at telling agents exactly what to build.
In March 2025, I wrote that the implementation layer was dissolving. That the bottleneck was shifting from writing code to articulating what you actually want. I called it a “specification renaissance.”
I was right about the direction. I was wrong about the speed.
The Priesthood Falls (Again)
The history hasn’t changed, so I’ll keep this brief for anyone who read the original.
In 1957, IBM released FORTRAN. For the first time, programmers could write AREA = 3.14159 * RADIUS ** 2 instead of dozens of machine code instructions. John Backus, the creator, later described the culture he was challenging: a “priesthood of programming” who regarded themselves as “guardians of mysteries far too complex for ordinary mortals.”
Grace Hopper pushed compilers against resistance from people who thought automatic programming was crazy. Colleagues worried it would make programmers obsolete.
They were right to be nervous. They were wrong about the outcome. High-level languages created an explosion in demand for programmers. The skills that made you valuable as an assembly programmer became irrelevant to 99% of the work. New skills became essential.
The priests didn’t disappear. They transformed.
That same pattern is playing out right now. Except this time, we have the receipts.
From “Gradually” to “Suddenly”
In the original piece, I quoted the old line about change happening “gradually, then suddenly.” I told those founders we were in the “gradually” phase.
We’re not anymore.
In 2025, GitHub saw 43 million pull requests merged per month, a 23% increase year over year. Annual commits pushed jumped 25% to nearly one billion. Roughly 85% of developers now use AI coding tools on a regular basis, and around 46% of all code written by active developers comes from AI. Those aren’t projections. Those are actuals.
Stripe’s internal AI agents produce over 1,000 merged pull requests every week. TELUS saved more than 500,000 hours using AI-driven development across 13,000 internal solutions. Zapier hit 89% AI adoption across their entire organization. This isn’t a pilot program or a handful of early adopters running experiments. This is how software gets built now at companies that are paying attention.
A year ago I said the teams achieving 10x productivity gains were still outliers. Some of them still are. But 2x to 5x gains are now common among engineers who have figured out how to work with agents properly. The key word there is “properly,” and I’ll come back to that.
The Vibe Coding Arc
Something happened this past year that I didn’t predict, mostly because the term didn’t exist yet when I wrote the original piece.
In February 2025, Andrej Karpathy posted a tweet describing what he called “vibe coding.” You give in to the vibes, embrace the output, forget that the code even exists. It was a throwaway thought. A shower-time tweet. It became the dominant frame for AI-assisted development for an entire year.
Then people tried to ship vibe-coded software to production. Security vulnerabilities. Unmaintainable architecture. Accumulated technical debt from code nobody reviewed because nobody could read it. The industry learned a hard lesson: AI can generate code faster than any human, but speed without direction produces expensive garbage at scale.
Exactly one year after his original tweet, Karpathy posted again. This time he retired the term. In its place, he proposed “agentic engineering,” which he defined as the discipline where you’re not writing the code directly 99% of the time, you’re orchestrating agents who do, and acting as oversight. The word “engineering” was deliberate. Art, science, and professional skill. Something you can learn and get better at.
This arc, from vibe coding to agentic engineering, is exactly the transition I described in 2025 as the shift from implementation to articulation. The industry just needed a year of painful experience to validate it.
The Bottleneck Moved (Again)
Here’s what I said last year: the bottleneck shifted from implementation to articulation. The person who can clearly specify what they want now has more leverage than an entire team of developers who are fuzzy on the requirements.
That’s still true. But the bottleneck has already moved again, and it moved faster than I expected.
We now have three distinct bottlenecks that surface depending on the maturity of the team:
Specification is still the first wall most people hit. If you can’t describe what you want built with precision, agents will build you something that compiles, passes obvious tests, and solves the wrong problem. A Google Cloud PM recently shared a story about an intern who accomplished more in an afternoon with Claude Code than a senior engineer could do in three days. The difference wasn’t the tool. The intern was better at breaking down the problem into clear, verifiable subtasks.
Context is the wall that hits next. Prompt engineering was the buzzword in 2023 and 2024. In 2026, it’s been overtaken by context engineering. Anthropic published guidance on this, defining it as the discipline of curating and maintaining the optimal set of information during inference. ThoughtWorks calls it the biggest shift in developer experience this year. The idea is straightforward: the quality of an agent’s output depends less on how cleverly you phrase your request and more on what information the agent has access to when it works. Your codebase conventions, your architectural decisions, your team’s patterns, your domain constraints. If those things aren’t structured in a way agents can consume, it doesn’t matter how good the model gets.
Governance is the wall that hits at scale. When a single agent can produce a thousand PRs a week, you need automated security scanning, audit trails, quality gates, and clear ownership models. Manual code review can’t keep pace with agent-generated output. The organizations figuring this out are the ones who will scale. The ones who aren’t are building a mountain of technical debt they can’t see yet.
Context Engineering Is the New Skill
In the original article, I listed “context curation” as a becoming-essential skill. I buried it in a bullet point. That was a mistake. It deserved its own section, because it turned out to be one of the defining skill categories of 2026.
Here’s the short version. Large language models don’t learn your preferences through osmosis. They know exactly what’s in their context window and nothing else. Early on, people treated that as a limitation. Smart teams now treat it as a design surface.
Anthropic open-sourced the Agent Skills specification in late 2025. It’s been adopted by Claude Code, GitHub Copilot, Cursor, and OpenAI Codex. The idea is that instead of stuffing everything into a system prompt and hoping the agent pays attention, you structure your expertise as modular skills that get loaded on demand. Write a skill once, use it everywhere. The agent loads only what it needs for the current task, which keeps the context lean and the output consistent.
This matters because of something researchers call “context rot.” Million-token context windows sound impressive, but the transformer architecture forces every token to attend to every other token. At 4,000 tokens, that works great. At 400,000 tokens, performance degrades in predictable ways. Information in the middle gets lost. Attention spreads thin. The agent’s effective intelligence drops.
Context engineering solves this by being deliberate about what goes in. Your project’s CLAUDE.md file, your architectural rules, your team conventions, your skill definitions. These are the new artifacts that determine output quality. For platform engineers, this should feel familiar. We’ve spent careers building abstractions, writing documentation, creating self-service interfaces. Context engineering is that same work, aimed at a new consumer: the AI agent.
What Claude Code Actually Looks Like in 2026
A year ago, I described Claude Code as “an autonomous agent that can understand a problem description, break it into components, implement solutions across multiple files, run tests, fix bugs, and iterate.” That description is still technically accurate in the way that describing a car as “a box with wheels that moves” is technically accurate.
Here’s what the tool actually does now.
Claude Code runs agent teams. Not one agent working through a task sequentially, but multiple specialized agents working in parallel. You can have one agent analyzing code for reuse opportunities, another reviewing for quality issues, and a third checking for efficiency problems, all running simultaneously and reporting back. This isn’t a beta feature anymore. It shipped.
It has voice mode. You can speak your instructions instead of typing them. This sounds like a convenience feature until you realize how it changes the workflow. Describing architecture out loud, talking through constraints, explaining the problem context verbally while the agent works. Coding sessions now look less like typing at a terminal and more like directing a team.
It remembers things. Not in the way early AI tools pretended to remember by echoing back your last message, but through actual persistent memory across sessions. Architectural preferences, coding patterns, project context. The agent picks up where it left off.
It has a full 1M token context window on Opus 4.6. Skills that load on demand. Native IDE extensions for VS Code and JetBrains. Background agents that run while you do other work. A /loop command that executes recurring tasks on a schedule.
The tool I showed those founders in 2025 was a preview. What exists now is a development environment where the human’s job is to specify, direct, and verify while the agents handle execution.
Platform Engineers as Agent Architects
In the original piece, I called platform engineers “intent orchestrators.” I still like that framing, but it needs updating.
The reality in 2026 is that platform teams are becoming the people who design and maintain the systems that agents operate within. Not just infrastructure provisioning and CI/CD pipelines, but the guardrails, the quality gates, the context structures, and the governance frameworks that make agent-driven development safe at scale.
Think about it this way. When a single developer can deploy five parallel agents that each generate, test, and refine code autonomously, somebody needs to make sure those agents are working within the right constraints. Somebody needs to define what “safe self-service” looks like when the self-service consumer is an AI agent rather than a human developer. Somebody needs to build the automated security scanning that catches vulnerabilities at agent speed rather than human speed.
That somebody is the platform team.
The role isn’t less technical. If anything, it’s more technical. You need to understand how context windows work to design effective skills. You need to understand agent orchestration patterns to build useful development workflows. You need to understand security at a systems level to build governance that scales. This is the natural evolution of what platform engineering has always been, abstracting complexity and providing guardrails, but the consumer has changed.
The New Practitioners
Here’s something I didn’t cover in the original piece, and I should have.
The practitioners building with these tools are no longer limited to professional software engineers. A thoracic surgeon publicly shared that he learned to code through Claude Code, ran 67 autonomous agent sessions, and shipped a full-stack platform with a blog, analytics, and multi-agent orchestration. Not a toy app. A production system.
Product managers are building working prototypes to validate ideas before writing a single spec document. Researchers are building data analysis pipelines and frontend visualizations without waiting on engineering teams. Security teams are using agents to analyze unfamiliar codebases. Domain experts with deep knowledge of their field but no formal computer science training are building real tools that solve real problems.
This is the FORTRAN parallel playing out in real time. When compilers eliminated the need to think in machine language, the number of people who could build software exploded. The same thing is happening now. The barrier to entry hasn’t dropped to zero, you still need clear thinking, domain knowledge, and the discipline to verify outputs, but it has dropped far enough that the population of builders is expanding rapidly.
The bottleneck isn’t “can you code?” anymore. It’s “do you have taste?” Do you understand the problem well enough to know when the agent’s output is right and when it’s subtly wrong? Do you know what good looks like in your domain? Can you architect a solution before you try to build it?
The New Skill Stack (Updated)
Here’s the revised version of what I listed a year ago, adjusted for what we’ve actually learned.
Still essential, now proven: Systems thinking and architectural reasoning remain the foundation. If you can’t think about how components interact, agents will build you a collection of parts that don’t work together. Understanding tradeoffs between consistency and availability, speed and safety, simplicity and flexibility. These haven’t diminished in value at all. Domain expertise, meaning deep knowledge of the actual problem space, is more valuable than it’s ever been.
Now essential, no longer emerging: Clear written specification. Your specs are your code now, and the quality of your writing directly determines the quality of your output. Context engineering across rules, skills, memory, and project documentation. Multi-agent orchestration, understanding when to run agents in parallel versus sequentially, when to use specialized models for specific tasks, and how to coordinate handoffs between agents. Verification thinking, knowing how to validate that what the agent built actually solves your problem, not just that it compiles and passes tests.
Newly essential, barely discussed a year ago: Governance design. Building quality gates, security scanning, and audit trails that work at agent speed. Agent Skills authoring, structuring your team’s expertise as portable, reusable skill definitions that any agent can consume. Cognitive load management, being deliberate about what goes into context and what stays out, because more information doesn’t mean better results.
Diminished further than expected: Memorizing syntax and APIs was already declining. It’s now genuinely irrelevant for most work. Writing boilerplate code, generating test scaffolds, creating standard configurations. These are fully delegated tasks now. Manual code review for routine issues. Agents catch obvious problems faster than humans. The interesting review work, the architectural stuff, the subtle design implications, that’s still human territory.
What This Means for Education
I build training courses. I’ve been doing this for over twenty-five years. I’ve trained more than a million engineers through KodeKloud. And the model I’ve been using is fundamentally changing.
The old approach was: learn the syntax, practice the mechanics, build muscle memory, eventually understand the patterns. We front-loaded implementation skills because implementation was the bottleneck.
That model still produces people who can write code. It doesn’t produce people who can direct agents effectively. The gap between writing code yourself and specifying what an agent should build is real, and our training pipelines haven’t caught up.
The new model needs to front-load specification, context design, and verification. Teach people to think architecturally before they think syntactically. Teach them to write CLAUDE.md files and Agent Skills and structured constraints. Teach them to evaluate AI output critically, not just for correctness but for maintainability, security, and alignment with actual business requirements.
I’m not saying we abandon technical depth. The engineers who thrive in this environment are the ones who understand systems deeply enough to catch when an agent produces something that looks right but isn’t. But the sequence changes. Architecture first, then implementation details. Verification first, then generation. The ability to explain what you want clearly, before the ability to build it yourself.
The Cognitive Debt Problem
There’s a new failure mode that didn’t exist when I wrote the original piece. People are calling it “cognitive debt,” and it’s the accumulated cost of poorly managed AI interactions, context loss, and unreliable agent behavior over time.
Here’s how it shows up. A team adopts AI coding tools aggressively. Individual productivity metrics go up. Lines of code increase. Pull requests ship faster. But organizational delivery stays flat or actually degrades. Nobody fully understands the codebase anymore because significant portions were generated rather than written. Bugs in AI-generated code take longer to diagnose because the developer who “wrote” it doesn’t have the mental model of how it works. Knowledge that used to live in people’s heads now lives in agent context that gets lost between sessions.
This is the productivity paradox I mentioned in 2025, and it’s now well documented. The teams that avoid it are the ones who treat agent-generated code with the same rigor they’d apply to code from a new hire who’s fast but unfamiliar with the codebase. Review it. Understand it. Document why it exists. Build tests that capture the intent, not just the behavior.
Cognitive debt is what happens when you get the “vibe coding” speed without the “agentic engineering” discipline.
The Dissolution, One Year Later
Here’s the thing about layers dissolving: the layer doesn’t disappear. It becomes infrastructure. It becomes assumed. It becomes boring.
Assembly language didn’t vanish. Somewhere right now, someone is writing assembly for a device driver or a kernel module. But for 99% of programmers, it’s invisible. A layer they never touch.
In 2025, I said implementation wouldn’t vanish either. That prediction has held up. Code is still being written. It’s just increasingly written by agents operating under human direction. The layer hasn’t disappeared. It’s dropped below the line of what most practitioners need to think about directly.
What I underestimated was how fast the next layer would form on top. We’re already seeing the early signs of a world where you don’t just tell an agent what to build. You tell a team of agents, each with specialized skills, operating within governance frameworks, drawing from structured context, reporting through quality gates. The human isn’t just a specifier anymore. The human is the architect of the system that specifies.
It’s layers all the way up.
A year ago, I asked: “What are you going to build when the implementation layer dissolves?”
Some people answered that question. A surgeon built a platform. A three-person startup outshipped a team of twelve. Platform engineers became agent architects. Non-developers became builders.
The question for 2026 isn’t about whether the layer is dissolving. That’s settled. The question is whether you’re building the skills to work at the layer above it. Context engineering. Agent orchestration. Governance design. Specification as a discipline rather than an afterthought.
The implementation layer dissolved.
The orchestration layer is forming.
What are you going to build on top of it?
Michael Rishi Forrester is a Principal Training Architect and DevOps Advocate at KodeKloud, founder of The Performant Professionals, and has been preparing tomorrow’s innovators for over 25 years. He has trained more than 1 million engineers and focuses on helping technical professionals adapt through industry transformations.
Connect: @peopleforrester | linkedin.com/in/michaelrishiforrester | michaelrishiforrester.com
Tags: #AI #DevOps #PlatformEngineering #FutureOfWork #ClaudeCode #AgenticEngineering #ContextEngineering #SoftwareDevelopment #TechLeadership
We Have Never Deskilled the Mind Like This Before
We Have Never Deskilled the Mind Like This Before And we have no idea what happens next — History has a pattern for what happens when a new technology makes a skilled trade obsolete. The factory floor ended the artisan economy. Before mass production, a blacksmith spent years as an apprentice, learning the feel of metal under a hammer, when to quench and when to let cool, how to read the color of heated steel. That knowledge lived in hands and in bodies. It transferred slowly, person to person, across decades. It was the kind of skill that couldn’t be written down because most of it wasn’t conscious. It was accumulated.. Then came factories. A stamping machine could produce in seconds what a journeyman spent years learning to make by hand. The craft didn’t disappear overnight. But the value of craft collapsed. The apprentice pipeline dried up. Why spend seven years under a master blacksmith when a factory line would hire you on Monday? The knowledge succession broke. And nobody really noticed until the masters were gone. — ## The Digital Age Reversed It. Briefly. When computing arrived, something unusual happened: skilled craft came back. Software engineering rebuilt the artisan economy for the knowledge age. Junior developers wrote terrible code, got their PRs destroyed in review, debugged things they didn’t understand, got paged at 2am and figured out why production was on fire. That was the apprenticeship. That was how the knowledge transferred. Not through documentation or onboarding decks. Through the doing.. A senior engineer wasn’t just someone who knew more. They were someone with scar tissue. They’d shipped the bad architecture and lived with the consequences. They’d optimized prematurely and spent six months unwinding it. They’d built the system that couldn’t scale and been the one who had to explain why. The judgment that made senior engineers valuable wasn’t knowledge. It was earned intuition, built over years in the implementation layer. The guild model came back. Junior, mid, senior, staff, principal. A legitimate apprenticeship ladder, hidden inside job titles. — ## Now We’re Doing It Again This is not the first time technology has gone after cognitive work. In the 1940s and 50s, “computer” was a job title held by human beings, people whose entire profession was mathematical calculation. Electronic computers eliminated that profession within a generation. Spreadsheets wiped out roughly 400,000 accounting clerk positions after 1980. Word processors dissolved the typing pool. Legal databases automated the citation research that had previously consumed entire workdays for junior associates. Technology has been replacing specific cognitive tasks for decades. This is not new. What is new, and it matters, is the generality of what’s happening now. Every prior wave of cognitive automation targeted a narrow function. Spreadsheets did arithmetic. Legal databases did citation lookup. Word processors handled document formatting. Each tool was a scalpel: precise, domain-specific, bounded. The humans who lost those jobs had somewhere to go, because the automation only reached so far. An LLM doesn’t have one PhD’s worth of pattern recognition. It has the distilled output of essentially every PhD who ever published, every Stack Overflow thread ever written, every codebase ever committed to a public repository. It writes code, drafts documents, analyzes architecture, explains tradeoffs, and reviews pull requests, not in one narrow lane but across all of them at once. This is a different kind of tool. Not a scalpel. Something more like a general solvent. The implementation layer is going the way of the stamping machine. Not gone, but no longer where the value lives. No longer where you invest years of human time. And here’s where the collective mythology kicks in: the engineers who already have deep systems knowledge are not threatened by this. They’re multiplied by it. The business case for hiring senior talent has never been stronger. True enough. But it leaves something out. — ## The Problem Is Where Elite Engineers Come From The multiplier narrative has a data problem. METR, an AI safety research organization, ran a randomized controlled trial in mid-2025 with 16 experienced open-source developers and 246 real-world tasks. They found that AI tools made experienced developers 19% slower, not faster. The developers predicted a 24% speedup. They still believed AI had helped them. They were wrong. A separate field experiment by MIT, Microsoft, and Accenture, covering roughly 1,974 developers, found that junior developers gained 27–39% productivity from AI assistance while senior developers gained only 8–13%. The “100x engineer” is largely an aspiration right now. The truth is context-dependent: seniors use AI better for architectural decisions; juniors benefit more on implementation tasks. Neither group gets a clean multiplier. But set the productivity debate aside. The harder problem runs deeper. Every senior engineer working today built their judgment in the implementation layer. They got there by being junior, then mid, then senior. They wrote the code. They owned the bugs. They did the work that AI is now going to do. The hiring signal is already visible. A Stanford and ADP payroll study published in August 2025, covering millions of workers, found that employment for software developers aged 22–25 had dropped roughly 20% from its late-2022 peak, while workers aged 30 and up in the same AI-exposed fields saw employment grow. A Harvard study that same year, examining 285,000 firms and 62 million workers, found that companies adopting generative AI saw junior employment drop roughly 9–10% within six quarters. SignalFire reported a 50% decline in new role starts by people with less than a year of post-graduate experience at major tech firms between 2019 and 2024. To be fair: post-pandemic correction, rising interest rates, and reduced venture capital are all real factors here. The Stanford study’s authors themselves noted they make no claim that AI is the sole cause. This is a multi-causal decline, not a clean AI story. But some of it is clearly AI. Salesforce announced it would hire no new software engineers in 2025. Shopify’s CEO issued a memo requiring teams to prove AI can’t do a job before asking for headcount. The rationale in each case is the same: why grow the junior headcount when AI can absorb implementation work? The economics are obvious. Each decision makes complete sense for the organization making it. But organizations don’t exist in isolation. Industries do. And the industry is collectively making the same rational decision, which means the industry is collectively eliminating the environment where the next generation of senior engineers gets built. Sociologists call this a tragedy of the commons. Labor economists would recognize it from Harry Braverman’s Labor and Monopoly Capital (1974), the foundational work on deskilling, which documented how industrial capitalism systematically separated the conception of work from its execution, concentrating judgment at the top and eliminating it from the bottom. The Communications of the ACM published a feature in 2025 explicitly applying Braverman’s framework to AI. Cal Newport invoked Braverman by name in January 2026 when warning about AI-driven deskilling. When Anthropic’s own January 2026 Economic Index report used “deskilling” as an analytical category to describe Claude’s effect on occupations, the concept went from academic framing to industry description. — ## The Compounding Problem Nobody Is Naming There’s a term now circulating in research circles: never-skilling. Not deskilling, where you had a skill and lost it. Never-skilling is what happens to the generation that enters the workforce after the training-pipeline tasks are already gone. They don’t lose a skill. They never develop it in the first place, because the work that would have built it is being handled by agents, reviewed by elites who are the last people to have gone through the full crucible. The junior developers who do get hired today aren’t writing code from scratch and fixing their mistakes. They’re reviewing AI output. Catching the errors that surface. Learning, in some degree, to recognize what wrong looks like from the outside. Nobody knows whether that builds the same quality of judgment. Ask any experienced engineer whether reading about concurrency bugs is the same as having caused one. Ask whether reviewing AI-generated infrastructure code teaches you the same things as being the one paged at 3am when that infrastructure fails. Carnegie Mellon researcher Aniket Kittur has warned that AI is producing a loss of basic knowledge among engineers who rely on it without engaging with it. Matt Beane at UC Santa Barbara has spent years studying how AI tools disrupt the apprenticeship dynamics through which expertise actually transfers. Microsoft’s CTO Mark Russinovich and Scott Hanselman published a piece in the Communications of the ACM in February 2026 proposing a new preceptor-based training model for engineering, explicitly because the traditional path is breaking down. They called the knowledge succession concern “a hot topic” in conversations with customers. IBM announced in February 2026 that it would triple its entry-level hiring. AWS CEO Matt Garman called the idea of replacing all juniors with AI “one of the dumbest things I’ve ever heard.” These are the loudest institutional counter-signals to the prevailing tide. Whether they shift anything at the industry level, or whether they’re outliers in a race to the bottom, is an open question. — ## The Question Nobody Is Asking The conversation about AI and the future of work is almost entirely about the workers who exist today. Will senior engineers be replaced? Will mid-level roles survive? What do engineers need to learn to stay relevant? These are real questions. But they’re not the most important question. The most important question is about the workers who don’t exist yet. The twelve-year-old who wants to build things. The computer science student learning to code in a world where AI writes most of the code. The career changer who wants to enter tech but finds the entry-level pipeline largely automated away. What is their path to senior? What replaces the decade of implementation work that built every senior engineer working today? Researchers at Stanford, Carnegie Mellon, and UC Santa Barbara are asking this. Microsoft is asking it. IBM is acting on it. But there is no policy framework, no industry-wide response, no coordinated effort. Just individual companies making the rational short-term call, and a handful of voices warning that the aggregate result of those individual calls will be visible, and costly, in about fifteen years. That’s how knowledge succession failures always work. They’re invisible until the last master retires. — ## What History Actually Teaches Us When electronic computers displaced human computers in the 1940s and 50s, most of the displaced workers were still alive to retrain. When spreadsheets restructured accounting, the profession adapted: roughly 400,000 clerk positions gone, but about 600,000 new accountant roles created. The cognitive work didn’t vanish; it moved up the abstraction ladder. Maybe that pattern holds here. Maybe the junior engineers who can’t find traditional entry-level implementation roles find work as AI supervisors, output reviewers, and systems monitors, accumulating in that work a different but equivalent kind of judgment. Maybe. But the analogy that should worry people more is what happened to physical trades after the industrial revolution. The trades didn’t die. They contracted, specialized, and became luxury. The knowledge largely survived. But the pipeline that had once produced thousands of journeymen per generation now produces dozens. The craft is preserved as heritage, not as infrastructure. If software implementation follows the same path, preserved by the people who love it and handled at scale by agents, the question is whether the pipeline that produces the elites who oversee those agents can sustain itself on craft-scale volume. The industries that navigated these transitions without catastrophic knowledge loss did so because they made deliberate choices. They built institutions. Trade programs, certification bodies, structured apprenticeships. Things that kept the transfer pipeline alive even when the economics of the moment argued against it. They recognized that the market would not solve this on its own, because the market’s time horizon is a quarterly report and the knowledge succession problem plays out over fifteen years. Software engineering as an industry has never had to make this choice before. The technology never moved fast enough to outpace the apprenticeship pipeline. Now it might. And the window for deliberate choices is open right now, while the people who hold the knowledge are still working. — Michael Rishi Forrester has spent 25 years training engineers through platform shifts, from Red Hat to ThoughtWorks to AWS to cloud-native to AI. He’s a Principal Training Architect at KodeKloud, founder of The Performant Professionals, and has watched more “this changes everything” moments than he can count. He’s not sure this one is different. He’s not sure it isn’t. Bluesky | Mastodon | Hachyderm | LinkedIn | YouTube | X — Tags: #FutureOfWork #AI #SoftwareEngineering #TechLeadership #AIStrategy #EngineeringCulture #GenerativeAI
Opinion - We Have Never Deskilled the Mind Like This Before
We Have Never Deskilled the Mind Like This Before
And we have no idea what happens next
History has a pattern for what happens when a new technology makes a skilled trade obsolete.
The factory floor ended the artisan economy. Before mass production, a blacksmith spent years as an apprentice, learning the feel of metal under a hammer, when to quench and when to let cool, how to read the color of heated steel. That knowledge lived in hands and in bodies. It transferred slowly, person to person, across decades. It was the kind of skill that couldn’t be written down because most of it wasn’t conscious. It was accumulated..
Then came factories. A stamping machine could produce in seconds what a journeyman spent years learning to make by hand. The craft didn’t disappear overnight. But the value of craft collapsed. The apprentice pipeline dried up. Why spend seven years under a master blacksmith when a factory line would hire you on Monday?
The knowledge succession broke. And nobody really noticed until the masters were gone.
The Digital Age Reversed It. Briefly.
When computing arrived, something unusual happened: skilled craft came back.
Software engineering rebuilt the artisan economy for the knowledge age. Junior developers wrote terrible code, got their PRs destroyed in review, debugged things they didn’t understand, got paged at 2am and figured out why production was on fire. That was the apprenticeship. That was how the knowledge transferred. Not through documentation or onboarding decks. Through the doing..
A senior engineer wasn’t just someone who knew more. They were someone with scar tissue. They’d shipped the bad architecture and lived with the consequences. They’d optimized prematurely and spent six months unwinding it. They’d built the system that couldn’t scale and been the one who had to explain why. The judgment that made senior engineers valuable wasn’t knowledge. It was earned intuition, built over years in the implementation layer.
The guild model came back. Junior, mid, senior, staff, principal. A legitimate apprenticeship ladder, hidden inside job titles.
Now We’re Doing It Again
This is not the first time technology has gone after cognitive work.
In the 1940s and 50s, “computer” was a job title held by human beings, people whose entire profession was mathematical calculation. Electronic computers eliminated that profession within a generation. Spreadsheets wiped out roughly 400,000 accounting clerk positions after 1980. Word processors dissolved the typing pool. Legal databases automated the citation research that had previously consumed entire workdays for junior associates.
Technology has been replacing specific cognitive tasks for decades. This is not new.
What is new, and it matters, is the generality of what’s happening now.
Every prior wave of cognitive automation targeted a narrow function. Spreadsheets did arithmetic. Legal databases did citation lookup. Word processors handled document formatting. Each tool was a scalpel: precise, domain-specific, bounded. The humans who lost those jobs had somewhere to go, because the automation only reached so far.
An LLM doesn’t have one PhD’s worth of pattern recognition. It has the distilled output of essentially every PhD who ever published, every Stack Overflow thread ever written, every codebase ever committed to a public repository. It writes code, drafts documents, analyzes architecture, explains tradeoffs, and reviews pull requests, not in one narrow lane but across all of them at once. This is a different kind of tool. Not a scalpel. Something more like a general solvent.
The implementation layer is going the way of the stamping machine. Not gone, but no longer where the value lives. No longer where you invest years of human time.
And here’s where the collective mythology kicks in: the engineers who already have deep systems knowledge are not threatened by this. They’re multiplied by it. The business case for hiring senior talent has never been stronger.
True enough. But it leaves something out.
The Problem Is Where Elite Engineers Come From
The multiplier narrative has a data problem.
METR, an AI safety research organization, ran a randomized controlled trial in mid-2025 with 16 experienced open-source developers and 246 real-world tasks. They found that AI tools made experienced developers 19% slower, not faster. The developers predicted a 24% speedup. They still believed AI had helped them. They were wrong. A separate field experiment by MIT, Microsoft, and Accenture, covering roughly 1,974 developers, found that junior developers gained 27–39% productivity from AI assistance while senior developers gained only 8–13%.
The “100x engineer” is largely an aspiration right now. The truth is context-dependent: seniors use AI better for architectural decisions; juniors benefit more on implementation tasks. Neither group gets a clean multiplier.
But set the productivity debate aside. The harder problem runs deeper.
Every senior engineer working today built their judgment in the implementation layer. They got there by being junior, then mid, then senior. They wrote the code. They owned the bugs. They did the work that AI is now going to do.
The hiring signal is already visible. A Stanford and ADP payroll study published in August 2025, covering millions of workers, found that employment for software developers aged 22–25 had dropped roughly 20% from its late-2022 peak, while workers aged 30 and up in the same AI-exposed fields saw employment grow. A Harvard study that same year, examining 285,000 firms and 62 million workers, found that companies adopting generative AI saw junior employment drop roughly 9–10% within six quarters. SignalFire reported a 50% decline in new role starts by people with less than a year of post-graduate experience at major tech firms between 2019 and 2024.
To be fair: post-pandemic correction, rising interest rates, and reduced venture capital are all real factors here. The Stanford study’s authors themselves noted they make no claim that AI is the sole cause. This is a multi-causal decline, not a clean AI story.
But some of it is clearly AI. Salesforce announced it would hire no new software engineers in 2025. Shopify’s CEO issued a memo requiring teams to prove AI can’t do a job before asking for headcount. The rationale in each case is the same: why grow the junior headcount when AI can absorb implementation work?
The economics are obvious. Each decision makes complete sense for the organization making it.
But organizations don’t exist in isolation. Industries do. And the industry is collectively making the same rational decision, which means the industry is collectively eliminating the environment where the next generation of senior engineers gets built.
Sociologists call this a tragedy of the commons. Labor economists would recognize it from Harry Braverman’s Labor and Monopoly Capital (1974), the foundational work on deskilling, which documented how industrial capitalism systematically separated the conception of work from its execution, concentrating judgment at the top and eliminating it from the bottom. The Communications of the ACM published a feature in 2025 explicitly applying Braverman’s framework to AI. Cal Newport invoked Braverman by name in January 2026 when warning about AI-driven deskilling. When Anthropic’s own January 2026 Economic Index report used “deskilling” as an analytical category to describe Claude’s effect on occupations, the concept went from academic framing to industry description.
The Compounding Problem Nobody Is Naming
There’s a term now circulating in research circles: never-skilling.
Not deskilling, where you had a skill and lost it. Never-skilling is what happens to the generation that enters the workforce after the training-pipeline tasks are already gone. They don’t lose a skill. They never develop it in the first place, because the work that would have built it is being handled by agents, reviewed by elites who are the last people to have gone through the full crucible.
The junior developers who do get hired today aren’t writing code from scratch and fixing their mistakes. They’re reviewing AI output. Catching the errors that surface. Learning, in some degree, to recognize what wrong looks like from the outside.
Nobody knows whether that builds the same quality of judgment. Ask any experienced engineer whether reading about concurrency bugs is the same as having caused one. Ask whether reviewing AI-generated infrastructure code teaches you the same things as being the one paged at 3am when that infrastructure fails.
Carnegie Mellon researcher Aniket Kittur has warned that AI is producing a loss of basic knowledge among engineers who rely on it without engaging with it. Matt Beane at UC Santa Barbara has spent years studying how AI tools disrupt the apprenticeship dynamics through which expertise actually transfers. Microsoft’s CTO Mark Russinovich and Scott Hanselman published a piece in the Communications of the ACM in February 2026 proposing a new preceptor-based training model for engineering, explicitly because the traditional path is breaking down. They called the knowledge succession concern “a hot topic” in conversations with customers.
IBM announced in February 2026 that it would triple its entry-level hiring. AWS CEO Matt Garman called the idea of replacing all juniors with AI “one of the dumbest things I’ve ever heard.” These are the loudest institutional counter-signals to the prevailing tide. Whether they shift anything at the industry level, or whether they’re outliers in a race to the bottom, is an open question.
The Question Nobody Is Asking
The conversation about AI and the future of work is almost entirely about the workers who exist today. Will senior engineers be replaced? Will mid-level roles survive? What do engineers need to learn to stay relevant?
These are real questions. But they’re not the most important question.
The most important question is about the workers who don’t exist yet.
The twelve-year-old who wants to build things. The computer science student learning to code in a world where AI writes most of the code. The career changer who wants to enter tech but finds the entry-level pipeline largely automated away. What is their path to senior? What replaces the decade of implementation work that built every senior engineer working today?
Researchers at Stanford, Carnegie Mellon, and UC Santa Barbara are asking this. Microsoft is asking it. IBM is acting on it. But there is no policy framework, no industry-wide response, no coordinated effort. Just individual companies making the rational short-term call, and a handful of voices warning that the aggregate result of those individual calls will be visible, and costly, in about fifteen years.
That’s how knowledge succession failures always work. They’re invisible until the last master retires.
What History Actually Teaches Us
When electronic computers displaced human computers in the 1940s and 50s, most of the displaced workers were still alive to retrain. When spreadsheets restructured accounting, the profession adapted: roughly 400,000 clerk positions gone, but about 600,000 new accountant roles created. The cognitive work didn’t vanish; it moved up the abstraction ladder.
Maybe that pattern holds here. Maybe the junior engineers who can’t find traditional entry-level implementation roles find work as AI supervisors, output reviewers, and systems monitors, accumulating in that work a different but equivalent kind of judgment.
Maybe. But the analogy that should worry people more is what happened to physical trades after the industrial revolution. The trades didn’t die. They contracted, specialized, and became luxury. The knowledge largely survived. But the pipeline that had once produced thousands of journeymen per generation now produces dozens. The craft is preserved as heritage, not as infrastructure.
If software implementation follows the same path, preserved by the people who love it and handled at scale by agents, the question is whether the pipeline that produces the elites who oversee those agents can sustain itself on craft-scale volume.
The industries that navigated these transitions without catastrophic knowledge loss did so because they made deliberate choices. They built institutions. Trade programs, certification bodies, structured apprenticeships. Things that kept the transfer pipeline alive even when the economics of the moment argued against it. They recognized that the market would not solve this on its own, because the market’s time horizon is a quarterly report and the knowledge succession problem plays out over fifteen years.
Software engineering as an industry has never had to make this choice before. The technology never moved fast enough to outpace the apprenticeship pipeline.
Now it might. And the window for deliberate choices is open right now, while the people who hold the knowledge are still working.
Michael Rishi Forrester has spent 25 years training engineers through platform shifts, from Red Hat to ThoughtWorks to AWS to cloud-native to AI. He’s a Principal Training Architect at KodeKloud, founder of The Performant Professionals, and has watched more “this changes everything” moments than he can count. He’s not sure this one is different. He’s not sure it isn’t.
Bluesky | Mastodon | Hachyderm | LinkedIn | YouTube | X
Tags: #FutureOfWork #AI #SoftwareEngineering #TechLeadership #AIStrategy #EngineeringCulture #GenerativeAI
Opinion - We Have Never Deskilled the Mind Like This Before
We Have Never Deskilled the Mind Like This Before
And we have no idea what happens next
History has a pattern for what happens when a new technology makes a skilled trade obsolete.
The factory floor ended the artisan economy. Before mass production, a blacksmith spent years as an apprentice, learning the feel of metal under a hammer, when to quench and when to let cool, how to read the color of heated steel. That knowledge lived in hands and in bodies. It transferred slowly, person to person, across decades. It was the kind of skill that couldn’t be written down because most of it wasn’t conscious. It was accumulated..
Then came factories. A stamping machine could produce in seconds what a journeyman spent years learning to make by hand. The craft didn’t disappear overnight. But the value of craft collapsed. The apprentice pipeline dried up. Why spend seven years under a master blacksmith when a factory line would hire you on Monday?
The knowledge succession broke. And nobody really noticed until the masters were gone.
The Digital Age Reversed It. Briefly.
When computing arrived, something unusual happened: skilled craft came back.
Software engineering rebuilt the artisan economy for the knowledge age. Junior developers wrote terrible code, got their PRs destroyed in review, debugged things they didn’t understand, got paged at 2am and figured out why production was on fire. That was the apprenticeship. That was how the knowledge transferred. Not through documentation or onboarding decks. Through the doing..
A senior engineer wasn’t just someone who knew more. They were someone with scar tissue. They’d shipped the bad architecture and lived with the consequences. They’d optimized prematurely and spent six months unwinding it. They’d built the system that couldn’t scale and been the one who had to explain why. The judgment that made senior engineers valuable wasn’t knowledge. It was earned intuition, built over years in the implementation layer.
The guild model came back. Junior, mid, senior, staff, principal. A legitimate apprenticeship ladder, hidden inside job titles.
Now We’re Doing It Again
This is not the first time technology has gone after cognitive work.
In the 1940s and 50s, “computer” was a job title held by human beings, people whose entire profession was mathematical calculation. Electronic computers eliminated that profession within a generation. Spreadsheets wiped out roughly 400,000 accounting clerk positions after 1980. Word processors dissolved the typing pool. Legal databases automated the citation research that had previously consumed entire workdays for junior associates.
Technology has been replacing specific cognitive tasks for decades. This is not new.
What is new, and it matters, is the generality of what’s happening now.
Every prior wave of cognitive automation targeted a narrow function. Spreadsheets did arithmetic. Legal databases did citation lookup. Word processors handled document formatting. Each tool was a scalpel: precise, domain-specific, bounded. The humans who lost those jobs had somewhere to go, because the automation only reached so far.
An LLM doesn’t have one PhD’s worth of pattern recognition. It has the distilled output of essentially every PhD who ever published, every Stack Overflow thread ever written, every codebase ever committed to a public repository. It writes code, drafts documents, analyzes architecture, explains tradeoffs, and reviews pull requests, not in one narrow lane but across all of them at once. This is a different kind of tool. Not a scalpel. Something more like a general solvent.
The implementation layer is going the way of the stamping machine. Not gone, but no longer where the value lives. No longer where you invest years of human time.
And here’s where the collective mythology kicks in: the engineers who already have deep systems knowledge are not threatened by this. They’re multiplied by it. The business case for hiring senior talent has never been stronger.
True enough. But it leaves something out.
The Problem Is Where Elite Engineers Come From
The multiplier narrative has a data problem.
METR, an AI safety research organization, ran a randomized controlled trial in mid-2025 with 16 experienced open-source developers and 246 real-world tasks. They found that AI tools made experienced developers 19% slower, not faster. The developers predicted a 24% speedup. They still believed AI had helped them. They were wrong. A separate field experiment by MIT, Microsoft, and Accenture, covering roughly 1,974 developers, found that junior developers gained 27–39% productivity from AI assistance while senior developers gained only 8–13%.
The “100x engineer” is largely an aspiration right now. The truth is context-dependent: seniors use AI better for architectural decisions; juniors benefit more on implementation tasks. Neither group gets a clean multiplier.
But set the productivity debate aside. The harder problem runs deeper.
Every senior engineer working today built their judgment in the implementation layer. They got there by being junior, then mid, then senior. They wrote the code. They owned the bugs. They did the work that AI is now going to do.
The hiring signal is already visible. A Stanford and ADP payroll study published in August 2025, covering millions of workers, found that employment for software developers aged 22–25 had dropped roughly 20% from its late-2022 peak, while workers aged 30 and up in the same AI-exposed fields saw employment grow. A Harvard study that same year, examining 285,000 firms and 62 million workers, found that companies adopting generative AI saw junior employment drop roughly 9–10% within six quarters. SignalFire reported a 50% decline in new role starts by people with less than a year of post-graduate experience at major tech firms between 2019 and 2024.
To be fair: post-pandemic correction, rising interest rates, and reduced venture capital are all real factors here. The Stanford study’s authors themselves noted they make no claim that AI is the sole cause. This is a multi-causal decline, not a clean AI story.
But some of it is clearly AI. Salesforce announced it would hire no new software engineers in 2025. Shopify’s CEO issued a memo requiring teams to prove AI can’t do a job before asking for headcount. The rationale in each case is the same: why grow the junior headcount when AI can absorb implementation work?
The economics are obvious. Each decision makes complete sense for the organization making it.
But organizations don’t exist in isolation. Industries do. And the industry is collectively making the same rational decision, which means the industry is collectively eliminating the environment where the next generation of senior engineers gets built.
Sociologists call this a tragedy of the commons. Labor economists would recognize it from Harry Braverman’s Labor and Monopoly Capital (1974), the foundational work on deskilling, which documented how industrial capitalism systematically separated the conception of work from its execution, concentrating judgment at the top and eliminating it from the bottom. The Communications of the ACM published a feature in 2025 explicitly applying Braverman’s framework to AI. Cal Newport invoked Braverman by name in January 2026 when warning about AI-driven deskilling. When Anthropic’s own January 2026 Economic Index report used “deskilling” as an analytical category to describe Claude’s effect on occupations, the concept went from academic framing to industry description.
The Compounding Problem Nobody Is Naming
There’s a term now circulating in research circles: never-skilling.
Not deskilling, where you had a skill and lost it. Never-skilling is what happens to the generation that enters the workforce after the training-pipeline tasks are already gone. They don’t lose a skill. They never develop it in the first place, because the work that would have built it is being handled by agents, reviewed by elites who are the last people to have gone through the full crucible.
The junior developers who do get hired today aren’t writing code from scratch and fixing their mistakes. They’re reviewing AI output. Catching the errors that surface. Learning, in some degree, to recognize what wrong looks like from the outside.
Nobody knows whether that builds the same quality of judgment. Ask any experienced engineer whether reading about concurrency bugs is the same as having caused one. Ask whether reviewing AI-generated infrastructure code teaches you the same things as being the one paged at 3am when that infrastructure fails.
Carnegie Mellon researcher Aniket Kittur has warned that AI is producing a loss of basic knowledge among engineers who rely on it without engaging with it. Matt Beane at UC Santa Barbara has spent years studying how AI tools disrupt the apprenticeship dynamics through which expertise actually transfers. Microsoft’s CTO Mark Russinovich and Scott Hanselman published a piece in the Communications of the ACM in February 2026 proposing a new preceptor-based training model for engineering, explicitly because the traditional path is breaking down. They called the knowledge succession concern “a hot topic” in conversations with customers.
IBM announced in February 2026 that it would triple its entry-level hiring. AWS CEO Matt Garman called the idea of replacing all juniors with AI “one of the dumbest things I’ve ever heard.” These are the loudest institutional counter-signals to the prevailing tide. Whether they shift anything at the industry level, or whether they’re outliers in a race to the bottom, is an open question.
The Question Nobody Is Asking
The conversation about AI and the future of work is almost entirely about the workers who exist today. Will senior engineers be replaced? Will mid-level roles survive? What do engineers need to learn to stay relevant?
These are real questions. But they’re not the most important question.
The most important question is about the workers who don’t exist yet.
The twelve-year-old who wants to build things. The computer science student learning to code in a world where AI writes most of the code. The career changer who wants to enter tech but finds the entry-level pipeline largely automated away. What is their path to senior? What replaces the decade of implementation work that built every senior engineer working today?
Researchers at Stanford, Carnegie Mellon, and UC Santa Barbara are asking this. Microsoft is asking it. IBM is acting on it. But there is no policy framework, no industry-wide response, no coordinated effort. Just individual companies making the rational short-term call, and a handful of voices warning that the aggregate result of those individual calls will be visible, and costly, in about fifteen years.
That’s how knowledge succession failures always work. They’re invisible until the last master retires.
What History Actually Teaches Us
When electronic computers displaced human computers in the 1940s and 50s, most of the displaced workers were still alive to retrain. When spreadsheets restructured accounting, the profession adapted: roughly 400,000 clerk positions gone, but about 600,000 new accountant roles created. The cognitive work didn’t vanish; it moved up the abstraction ladder.
Maybe that pattern holds here. Maybe the junior engineers who can’t find traditional entry-level implementation roles find work as AI supervisors, output reviewers, and systems monitors, accumulating in that work a different but equivalent kind of judgment.
Maybe. But the analogy that should worry people more is what happened to physical trades after the industrial revolution. The trades didn’t die. They contracted, specialized, and became luxury. The knowledge largely survived. But the pipeline that had once produced thousands of journeymen per generation now produces dozens. The craft is preserved as heritage, not as infrastructure.
If software implementation follows the same path, preserved by the people who love it and handled at scale by agents, the question is whether the pipeline that produces the elites who oversee those agents can sustain itself on craft-scale volume.
The industries that navigated these transitions without catastrophic knowledge loss did so because they made deliberate choices. They built institutions. Trade programs, certification bodies, structured apprenticeships. Things that kept the transfer pipeline alive even when the economics of the moment argued against it. They recognized that the market would not solve this on its own, because the market’s time horizon is a quarterly report and the knowledge succession problem plays out over fifteen years.
Software engineering as an industry has never had to make this choice before. The technology never moved fast enough to outpace the apprenticeship pipeline.
Now it might. And the window for deliberate choices is open right now, while the people who hold the knowledge are still working.
Michael Rishi Forrester has spent 25 years training engineers through platform shifts, from Red Hat to ThoughtWorks to AWS to cloud-native to AI. He’s a Principal Training Architect at KodeKloud, founder of The Performant Professionals, and has watched more “this changes everything” moments than he can count. He’s not sure this one is different. He’s not sure it isn’t.
Bluesky | Mastodon | Hachyderm | LinkedIn | YouTube | X
Tags: #FutureOfWork #AI #SoftwareEngineering #TechLeadership #AIStrategy #EngineeringCulture #GenerativeAI
Opinion - The Quiet Hiring Freeze Nobody Is Talking About
By Michael Rishi Forrester | March 2026
Two things happened this week that I can’t stop thinking about.
The Bureau of Labor Statistics dropped the February 2026 jobs report. The economy shed 92,000 jobs. Unemployment held at 4.4%. The headlines did what headlines do: political takes, macro hand-wringing, the usual noise.
The same week, Anthropic published a research paper, Labor Market Impacts of AI: A New Measure and Early Evidence by Maxim Massenkoff and Peter McCrory. It’s one of the more honest pieces of AI labor economics I’ve come across, because it doesn’t try to tell you what AI could do. It looks at what AI is actually doing in real workplaces right now.
Read together, these two data points tell a story the main narrative is completely missing.
Not a Firing Wave. A Hiring Freeze.
The Anthropic paper introduces what they call Observed Exposure, which measures the difference between what AI is theoretically capable of and what people are actually using it for in professional contexts today.
That gap is enormous. Computer and math occupations are theoretically 94% exposed to AI displacement. The actual observed coverage sits around 33%. Real deployment is running at roughly one-third of theoretical capability.
That’s not a reason to exhale. It’s a countdown.
The part that got the least attention: young workers aged 22 to 25 are entering AI-exposed occupations at a rate roughly 14% lower than they were in 2022. Companies are not laying off experienced analysts, customer service leads, or junior engineers in mass. They’re quietly not replacing them when they leave, and they’re not hiring the next cohort into those roles.
That’s the pattern. Not a dramatic collapse, just a slow drain.
Entry-level roles aren’t disappearing in a flash. They’re evaporating through attrition. A position opens, leadership pauses and asks whether an AI workflow can absorb it, then decides to wait. Three months later it’s not in the budget. Six months after that, nobody remembers what that person was even doing.
The Person Nobody Is Worried About
The public conversation about AI and jobs is almost entirely wrong about who is most at risk.
The Anthropic research found that the most AI-exposed workers tend to be female, older, more educated, and higher-paid. The ten most exposed occupations include computer programmers, customer service representatives, market research analysts, financial and investment analysts, and software QA testers.
This is not the warehouse worker automation story we’ve been rehearsing for a decade. The person most at risk right now is a knowledge worker in their 40s with a graduate degree doing information-dense work in an office. That’s a completely different economic and social problem than the one most policy conversations are preparing for.
What Actually Worries Me
I’ve spent 25 years watching large workforces try to adapt to major technology shifts. DevOps. Cloud migration. Kubernetes. Each time, there’s a recognizable pattern: the technology arrives faster than the organizational capacity to absorb it, and a cohort of people gets left behind not because they were bad at their jobs but because nobody built the bridge in time.
AI is moving faster than any of those prior shifts. And unlike those prior shifts, it doesn’t just change the tools. It changes what produces value in the first place.
What keeps me up isn’t the workers being displaced today. It’s the 22-year-old who enrolled in a CS program in 2022 because software engineering was the safest career bet, and is now graduating into a market that quietly stopped hiring for entry-level software roles while they were in school. That person has no runway. They haven’t had time to build the tacit knowledge, the judgment, and the pattern recognition that still makes experienced workers valuable.
Dario Amodei has said publicly that AI could eliminate 50% of entry-level white-collar jobs within 1 to 5 years. The Stanford/ADP payroll data already shows a 13% decline in entry-level hiring in AI-exposed occupations since 2022. These aren’t predictions anymore. They’re early data points.
Nobody Has This Figured Out
Something that gets glossed over in almost every AI and workforce conversation: nobody actually knows how to do this yet.
The frontier models are outperforming expectations faster than most enterprise transformation programs can keep up with. Organizations are genuinely trying to build the plane while it’s in the air. The companies claiming they have a mature AI transformation playbook are, in most cases, slightly ahead of their clients on a learning curve that everyone is still climbing.
That isn’t pessimism. It’s just accurate. And acknowledging it is more useful than pretending a clean methodology exists.
What I do know from watching a million engineers work through technology transitions is that the human and organizational side is always the hard part. The resistance isn’t technical. It’s psychological, cultural, and political. It shows up as the subject matter expert who feels threatened, the middle manager whose value was knowing things a language model now knows, the team that agrees AI is important in the all-hands meeting and then quietly changes nothing about how they work.
Those problems don’t have a technical solution. They require honest organizational leadership, genuine investment in people, and the willingness to tell the truth about what’s changing rather than managing perceptions.
The Deployment Gap Is Closing
The most important finding in the Anthropic paper is that gap between theoretical capability and actual deployment. AI is covering about a third of what it theoretically could right now. That gap closes as models improve and adoption spreads.
Organizations have a window because of it. Not a comfortable one, but a real one. The question is whether they use that time to build genuine AI-ready workforces, not checkbox training programs or a two-hour intro to ChatGPT, but real capability development that helps people understand how to work alongside these systems, what judgment still belongs to humans, and how to stay valuable as the automation frontier moves.
The young worker hiring signal is where I think the real crisis is forming. If companies quietly stop backfilling entry-level roles, we end up with a generation that never got the foundational reps. In five years there won’t be a shortage of AI tools. There will be a shortage of mid-career practitioners who have the contextual judgment to use those tools well, because we never built the pipeline that creates them.
What I’m Watching
Mass displacement hasn’t happened yet, and the Anthropic researchers are careful to say the early entry-level hiring signal is barely statistically significant with alternative explanations still on the table.
But the underlying trends are directional. Long-term unemployment is up 27% year-over-year. The information sector is contracting steadily. 330,000 federal knowledge workers are entering a job market already slowing in knowledge-role hiring. Entry-level positions in AI-exposed fields are quietly shrinking for workers in their early 20s.
Put that next to a technology currently deployed at one-third of its theoretical ceiling and still accelerating, and the shape of what’s coming starts to come into focus.
Organizations that wait until the signal is impossible to ignore will find the runway is a lot shorter than it looks today.
P.S. If you are not reading KP Reddy who, he and I must have had the same thoughts at the same time… I highly recommend. substack.com/@insights… I found his article after I wrote mine and was shocked about how we came to similar conclusions.
Sources: Anthropic, “Labor Market Impacts of AI: A New Measure and Early Evidence” (Massenkoff & McCrory, March 5, 2026). U.S. Bureau of Labor Statistics, February 2026 Employment Situation (March 6, 2026). KP Reddy, “The Jobs Report Just Told You Something the AI Debate Won’t,” Substack (March 6, 2026).
I Built an ArgoCD MCP Server. Here’s Why It’s Different.
I just published an ArgoCD MCP server, and I want to talk about why I bothered when there’s already an official one from argoproj-labs.
The short version: I wanted guardrails. Real ones.
https://github.com/peopleforrester/mcp-k8s-observability-argocd-server
The Problem with Most MCP Servers
Most MCP servers I’ve seen treat all operations the same. Read an application? Delete an application? Same friction level. That’s fine when you’re experimenting on your laptop. It’s less fine at 3 AM when an LLM agent is helping you debug a production outage and you’re too tired to notice it’s about to delete something important.
The official argoproj-labs server has a binary read-only toggle. That’s it. Either you can do everything, or you can only read. No middle ground.
I wanted something that understood the difference between “show me what’s deployed” and “delete this application from production.”
What I Did Differently
Dry-Run by Default
Every write operation previews changes first. You have to explicitly set dry_run=false to actually apply anything. This isn’t about not trusting the LLM—it’s about not trusting myself at 3 AM.
Three Permission Tiers, Not Two
- Tier 1 (Read): Always allowed, rate-limited
- Tier 2 (Write): Requires
MCP_READ_ONLY=false - Tier 3 (Destructive): Requires BOTH confirmation parameters—
confirm=trueANDconfirm_namematching the target
The last one is important. Deleting an application isn’t just “are you sure?” It’s “type the name of what you’re about to delete.” That extra friction is intentional.
Rate Limiting
Configurable limits on API calls per time window. Default is 100 calls per 60 seconds. This exists because LLMs sometimes get stuck in loops. Without rate limiting, a confused agent can hammer your ArgoCD API hundreds of times in seconds. Ask me how I know.
Audit Logging
Structured JSON logs with correlation IDs. Every operation—reads, writes, blocks, errors—gets logged. Optional file-based audit log if you want a paper trail. When something goes wrong, you want to know exactly what the agent did and when.
Secret Masking
Enabled by default. Sensitive values get redacted in output. The LLM doesn’t need to see your actual credentials to help you debug a sync failure.
Single-Cluster Restriction Mode
Optional setting that restricts operations to the default cluster only. Useful when you want to give an agent access to dev but keep it away from prod entirely.
Agent-Friendly Error Messages
Every blocked operation tells you exactly why it was blocked and what to do about it. “To enable: Set MCP_READ_ONLY=false” instead of a generic “permission denied.” LLMs are better at recovering when you give them actionable information.
The Configuration
Here’s the full set of environment variables:
| Variable | Default | Purpose |
|---|---|---|
MCP_READ_ONLY |
true | Block ALL write operations |
MCP_DISABLE_DESTRUCTIVE |
true | Block delete/prune even if writes enabled |
MCP_SINGLE_CLUSTER |
false | Restrict to default cluster only |
MCP_AUDIT_LOG |
(disabled) | Path to audit log file |
MCP_MASK_SECRETS |
true | Redact sensitive values in output |
MCP_RATE_LIMIT_CALLS |
100 | Max API calls per window |
MCP_RATE_LIMIT_WINDOW |
60 | Window size in seconds |
Notice the defaults. Out of the box, you get a read-only server with secret masking and rate limiting. You have to explicitly opt into more dangerous operations.
What This Actually Looks Like
Let’s say you’re using Claude Desktop with this server and you ask it to delete an application.
Without proper confirmation, the agent gets back:
ConfirmationRequired: Deleting application 'my-app' requires confirmation.
To confirm: Set confirm=true AND confirm_name='my-app'
Impact: This will remove the application and all its resources from the cluster.
The agent has to make a second call with both parameters matching. That’s the friction I wanted.
Why This Matters
MCP servers are giving AI agents direct access to your infrastructure. The convenience is real—having an LLM that can actually see your deployments, check sync status, and trigger operations is genuinely useful.
But convenience without guardrails is how incidents happen.
I built this server with the assumption that the agent will occasionally misunderstand what I want. That I’ll occasionally approve something I shouldn’t have. That at some point, someone will be tired and not paying close attention while an LLM is making changes to production.
The goal isn’t to make it impossible to do dangerous things. The goal is to make dangerous things require explicit, unambiguous intent.
Production systems deserve more friction than your laptop.
The server is available now. If you’re running ArgoCD and want to give AI agents access with actual guardrails, check it out.
Opinion - AI Is Replacing 80% of Coding. These Are the Skills That I Think Will Still Matter... at least for a while longer
AI Is Replacing 80% of Coding. These Are the Skills That Will Still Matter.
AI has replaced roughly 80% of what we traditionally called “coding skills.” It will keep replacing more. A handful of capabilities remain human. I’m not panicking, but I am paying attention.
According to Harness’s 2025 State of AI in Software Engineering report, 72% of organizations have experienced at least one production incident directly caused by AI-generated code. AI writes code faster than any human ever could. It also breaks things faster than any human ever could.
I’m writing more code now than I have in the past twenty years. When I say “writing,” I mean guiding the process, shepherding syntax, reviewing output. The actual generation isn’t me anymore.
This November marks 30 years in infrastructure, operations, DevOps, and platform engineering. Red Hat. ThoughtWorks. AWS. KodeKloud. I’ve watched every “this will replace engineers” wave come and go. Mainframes to client-server. Waterfall to Agile. On-prem to cloud. VMs to Kubernetes. Internal developer platforms to platform engineering.
Each transition killed certain tasks while making others more valuable. AI-assisted coding follows the same pattern. The code is being automated. The engineering is not.
Some lower-level engineering skills are disappearing. Some design decisions that once required years of experience are now handled by AI in seconds. But there are capabilities that AI genuinely cannot replicate. These are worth mastering now because they’re becoming scarcer.
Human Connection
The most critical moments in any engineering organization aren’t technical. They’re human. The production incident where someone needs to make the call to roll back or push forward. The architecture review where senior engineers have incompatible visions. The retrospective where a team needs to acknowledge failure without assigning blame, where you’re either building a culture of safety or a culture of fear.
I’ve trained hundreds of engineers across multiple organizations. The ones who become truly senior aren’t distinguished by their technical knowledge. Technical knowledge can be acquired. They’re distinguished by their ability to build trust, navigate conflict, create psychological safety, and communicate under pressure.
AI can answer technical questions. It cannot sit with a junior developer who just caused their first production incident and help them process the experience without shame. It can’t read the exhaustion or worry in a team standup. It can’t advocate for sustainable pace based on nonverbal cues it will never perceive.
Engineering is a team sport. The human skills are the sport itself.
Legal and Ethical Accountability
AI cannot be sued. You can.
If you blindly accept AI-generated code and it causes a data breach, you’re liable. If AI hallucinates a GPL-licensed snippet into your proprietary codebase, you’re liable. If an AI-generated algorithm introduces bias that harms users, you’re liable. Engineers are the accountability shield between AI capabilities and organizational risk.
I’m not talking about paranoia. AI operates without consequences. Humans operate within systems of professional responsibility, legal liability, and ethical obligation. That’s just reality.
When I review AI-generated code, I’m not just checking for bugs. I’m checking for license compliance, security vulnerabilities, privacy implications, and alignment with documented architecture decisions. The AI may not know we’re in a regulated industry with specific audit requirements. It may not be aware of institutional dependencies that matter to how the codebase actually functions.
Can we provide that context to AI? Yes, and we should. But accountability requires understanding consequences at a strategic level. AI generates outputs. Humans own outcomes.
Strategic Systems Thinking
AI optimizes for today. Good systems designers think about evolutionary architectures. Who maintains 10,000 AI-generated test cases when the schema changes?
Hopefully AI. But just because we can do something fast and repetitively doesn’t mean we should.
I see teams falling into this trap constantly. AI generates tests faster than humans can read them. Teams generate thousands of tests, achieve 95% coverage, declare victory. Six months later, they’re modifying 400 tests because the codebase changed. Will AI handle that maintenance? Probably. But is that approach strategically sound? Is testing being thoughtfully applied, or just blindly applied because we now have the capability?
Strategic thinking asks uncomfortable questions:
- How do we audit code produced faster than it can be reviewed?
- What’s our plan when the model we depend on gets deprecated or changes behavior?
- Who owns the technical debt that AI generates at scale?
- What happens when the engineer who understands the AI-generated codebase leaves?
AI is an incredible force multiplier for producing artifacts. It has no concept of maintaining them. Every line of code, human or AI-generated, is a liability that someone will have to understand, modify, and debug for years to come.
Velocity without sustainability is just faster accumulation of technical debt.
Translating Business Needs
When a stakeholder says “make it faster,” AI starts coding. A human asks: “How much are you willing to pay for that speed?”
That’s what architects do. They translate business needs into technical reality while surfacing the tradeoffs. When someone says they want 100% uptime, the architect asks what that means, what it costs, and what it implies for security and operations. When someone wants more resilience, the architect might respond: “That’s a million-dollar DR plan. Here’s what you get for that investment.”
“Make it faster” could mean any number of things:
- Our competitor launched a faster product and I’m panicking
- One customer complained and they happen to be loud
- I don’t understand why this takes time and I need education, not optimization
- We’re actually willing to spend $500,000 on infrastructure to shave off 200 milliseconds
AI cannot read the room. It can’t notice that the stakeholder’s real concern is job security, not system performance. It can’t recognize when the correct answer is “your current speed is actually fine, here’s the data” rather than immediately jumping to implementation.
Requirements translation is a human skill because it requires understanding human motivations, organizational politics, and the difference between stated preferences and revealed preferences. AI takes the ticket at face value. The engineer investigates what’s actually being asked.
Can we teach AI to do this? Yes. But the subtleties here will likely remain in the human domain for years.
Understanding Legacy Code
AI sees messy code and wants to refactor it. A human engineer knows that messy code has survived for a reason.
That function with 47 parameters and a comment that says “DO NOT TOUCH - see incident #4521”? AI wants to clean it up. The senior engineer knows that function handles an edge case that only appears under specific conditions. Maybe a particular customer in Japan submitting an order at exactly midnight UTC.
Legacy code is an archaeological record. Every production incident, every business pivot, every 3am hotfix that kept the company alive. The mess isn’t incompetence. It’s institutional memory encoded in syntax.
AI-assisted refactors can reintroduce bugs that were fixed a decade ago. The original fix was ugly, and AI optimizes for elegance. But the original developers weren’t writing ugly code. They were writing defensive code against threats the AI has never encountered.
Understanding legacy systems requires humility. It requires asking “why is this here?” before asking “how do I fix this?” AI only knows how to ask the second question. We still need humans for the first.
Architectural Reasoning
AI suggests textbook solutions. It doesn’t know about hidden constraints, regulatory requirements, political landscapes, or someone’s inexplicable preference for Redis over Memcached.
When you ask AI to design a system, it gives you the Stack Overflow consensus answer. It doesn’t know that your CTO has a vendetta against MongoDB from a previous job. It doesn’t know that your compliance team will reject anything storing data outside your home region. It doesn’t know that the “obviously correct” microservices architecture will get blocked because your ops team has three people and they’re already drowning.
Architecture isn’t about knowing the best solution. It’s about knowing the best viable solution. The one that accounts for organizational capacity, team skills, budget constraints, and the political capital required to actually ship it.
I’ve watched AI suggest Kubernetes deployments to teams that can barely manage a single EC2 instance. Technically correct. Organizationally catastrophic.
The architect’s job isn’t to find the optimal solution in a vacuum. It’s to find the optimal solution in your vacuum, with all its dust, debris, and hidden obstacles.
What To Do About It
If you’re an engineer watching AI transform your field, stop competing with AI at code generation. You’ll lose that race, and it’s not a race worth winning.
Instead, invest in the skills that make code generation valuable. Build trust. Understand accountability. Think strategically. Translate business needs. Respect legacy systems. Reason about architecture in context.
The engineers who thrive won’t be the ones who can prompt the best code. They’ll be the ones who can take that code and turn it into reliable, maintainable systems that actually serve human needs.
The code is being automated. The engineering never will be.
Michael Rishi Forrester is a Principal Training Architect at KodeKloud and founder of The Performant Professionals. November 2026 marks 30 years in infrastructure, operations, and DevOps. His focus: preparing tomorrow’s innovators while elevating the average.
How Much Coding Do You ACTUALLY Need in IT? (DevOps, SRE, Platform Engineering)
Do you need to know how to code to work in IT? The answer might surprise you—and it’s not as simple as “yes” or “no.”
Become a Kubestronaut in 2026 (Let's Get Certified)
Kick off 2026 with our first live session of the year on Jan 08, 10:00 PM SGT / 07:30 PM IST: “Become a Kubestronaut in 2026”! Start your certification journey strong and discover the proven roadma…
"DevOps Was A Mistake" - How The Industry Royally Messed It Up
Was DevOps ever really about the tools? In this episode, Michael Forrester, Chris McNabb, and Louis Puster unpack how an industry-changing movement got reduced to a job title and a CI/CD pipeline.
Kubernetes 1.34 Features Explained: What's New? (O' WaW Release)
🔥 Practice Kubernetes 1.34 now: kode.wiki/48XnePK
AWS Step Functions: The tool that organizes your Lambda mess 🧹
AWS Step Functions: The tool that organizes your Lambda mess 🧹
🚀 Kubestronaut #15: Linux Foundation Partnership ft. Clyde Seepersad
🚨 Linux Foundation Cyber Week Sale: tidd.ly/4owCGs3