Deep Dive: The Implementation Layer Is Dissolving
One Year In, the Shift Is No Longer a Prediction
March 2026
A year ago, I stood in front of a room full of founders and told them a command-line tool would change everything about how we think about building software. I showed them Claude Code. A few of them were excited. Most were polite. Some clearly thought I was overselling it.
Last week, one of those founders texted me. Her three-person startup just shipped a product that would have taken a team of twelve in 2023. She didn’t hire more engineers. She got better at telling agents exactly what to build.
In March 2025, I wrote that the implementation layer was dissolving. That the bottleneck was shifting from writing code to articulating what you actually want. I called it a “specification renaissance.”
I was right about the direction. I was wrong about the speed.
The Priesthood Falls (Again)
The history hasn’t changed, so I’ll keep this brief for anyone who read the original.
In 1957, IBM released FORTRAN. For the first time, programmers could write AREA = 3.14159 * RADIUS ** 2 instead of dozens of machine code instructions. John Backus, the creator, later described the culture he was challenging: a “priesthood of programming” who regarded themselves as “guardians of mysteries far too complex for ordinary mortals.”
Grace Hopper pushed compilers against resistance from people who thought automatic programming was crazy. Colleagues worried it would make programmers obsolete.
They were right to be nervous. They were wrong about the outcome. High-level languages created an explosion in demand for programmers. The skills that made you valuable as an assembly programmer became irrelevant to 99% of the work. New skills became essential.
The priests didn’t disappear. They transformed.
That same pattern is playing out right now. Except this time, we have the receipts.
From “Gradually” to “Suddenly”
In the original piece, I quoted the old line about change happening “gradually, then suddenly.” I told those founders we were in the “gradually” phase.
We’re not anymore.
In 2025, GitHub saw 43 million pull requests merged per month, a 23% increase year over year. Annual commits pushed jumped 25% to nearly one billion. Roughly 85% of developers now use AI coding tools on a regular basis, and around 46% of all code written by active developers comes from AI. Those aren’t projections. Those are actuals.
Stripe’s internal AI agents produce over 1,000 merged pull requests every week. TELUS saved more than 500,000 hours using AI-driven development across 13,000 internal solutions. Zapier hit 89% AI adoption across their entire organization. This isn’t a pilot program or a handful of early adopters running experiments. This is how software gets built now at companies that are paying attention.
A year ago I said the teams achieving 10x productivity gains were still outliers. Some of them still are. But 2x to 5x gains are now common among engineers who have figured out how to work with agents properly. The key word there is “properly,” and I’ll come back to that.
The Vibe Coding Arc
Something happened this past year that I didn’t predict, mostly because the term didn’t exist yet when I wrote the original piece.
In February 2025, Andrej Karpathy posted a tweet describing what he called “vibe coding.” You give in to the vibes, embrace the output, forget that the code even exists. It was a throwaway thought. A shower-time tweet. It became the dominant frame for AI-assisted development for an entire year.
Then people tried to ship vibe-coded software to production. Security vulnerabilities. Unmaintainable architecture. Accumulated technical debt from code nobody reviewed because nobody could read it. The industry learned a hard lesson: AI can generate code faster than any human, but speed without direction produces expensive garbage at scale.
Exactly one year after his original tweet, Karpathy posted again. This time he retired the term. In its place, he proposed “agentic engineering,” which he defined as the discipline where you’re not writing the code directly 99% of the time; you’re orchestrating the agents that do, and providing oversight. The word “engineering” was deliberate. Art, science, and professional skill. Something you can learn and get better at.
This arc, from vibe coding to agentic engineering, is exactly the transition I described in 2025 as the shift from implementation to articulation. The industry just needed a year of painful experience to validate it.
The Bottleneck Moved (Again)
Here’s what I said last year: the bottleneck shifted from implementation to articulation. The person who can clearly specify what they want now has more leverage than an entire team of developers who are fuzzy on the requirements.
That’s still true. But the bottleneck has already moved again, and it moved faster than I expected.
We now have three distinct bottlenecks that surface depending on the maturity of the team:
Specification is still the first wall most people hit. If you can’t describe what you want built with precision, agents will build you something that compiles, passes obvious tests, and solves the wrong problem. A Google Cloud PM recently shared a story about an intern who accomplished more in an afternoon with Claude Code than a senior engineer could do in three days. The difference wasn’t the tool. The intern was better at breaking down the problem into clear, verifiable subtasks.
Context is the wall that hits next. Prompt engineering was the buzzword in 2023 and 2024. In 2026, it’s been overtaken by context engineering. Anthropic published guidance on this, defining it as the discipline of curating and maintaining the optimal set of information during inference. ThoughtWorks calls it the biggest shift in developer experience this year. The idea is straightforward: the quality of an agent’s output depends less on how cleverly you phrase your request and more on what information the agent has access to when it works. Your codebase conventions, your architectural decisions, your team’s patterns, your domain constraints. If those things aren’t structured in a way agents can consume, it doesn’t matter how good the model gets.
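To make this concrete, here is what a project-level context file might look like. The structure and rules below are invented for illustration; the point is that conventions an agent cannot infer get written down where the agent will actually see them:

```markdown
# CLAUDE.md — project context for coding agents (illustrative example)

## Architecture
- Services talk over gRPC only; no service reads another service's database.
- All payment logic lives in the billing service. Do not duplicate it elsewhere.

## Conventions
- Python 3.12, type hints required, ruff for linting.
- New endpoints need an integration test before merge.

## Constraints
- Never log customer PII.
- Prefer extending existing modules over creating new top-level packages.
```

A file like this is boring to write and enormously leveraged: every agent session starts with your decisions already made instead of rediscovering (or violating) them.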
Governance is the wall that hits at scale. When a single agent can produce a thousand PRs a week, you need automated security scanning, audit trails, quality gates, and clear ownership models. Manual code review can’t keep pace with agent-generated output. The organizations figuring this out are the ones who will scale. The ones who aren’t are building a mountain of technical debt they can’t see yet.
Context Engineering Is the New Skill
In the original article, I listed “context curation” among the skills becoming essential. I buried it in a bullet point. That was a mistake. It deserved its own section, because it turned out to be one of the defining skill categories of 2026.
Here’s the short version. Large language models don’t learn your preferences through osmosis. They know exactly what’s in their context window and nothing else. Early on, people treated that as a limitation. Smart teams now treat it as a design surface.
Anthropic open-sourced the Agent Skills specification in late 2025. It’s been adopted by Claude Code, GitHub Copilot, Cursor, and OpenAI Codex. The idea is that instead of stuffing everything into a system prompt and hoping the agent pays attention, you structure your expertise as modular skills that get loaded on demand. Write a skill once, use it everywhere. The agent loads only what it needs for the current task, which keeps the context lean and the output consistent.
This matters because of something researchers call “context rot.” Million-token context windows sound impressive, but the transformer architecture forces every token to attend to every other token. At 4,000 tokens, that works great. At 400,000 tokens, performance degrades in predictable ways. Information in the middle gets lost. Attention spreads thin. The agent’s effective intelligence drops.
Context engineering solves this by being deliberate about what goes in. Your project’s CLAUDE.md file, your architectural rules, your team conventions, your skill definitions. These are the new artifacts that determine output quality. For platform engineers, this should feel familiar. We’ve spent careers building abstractions, writing documentation, creating self-service interfaces. Context engineering is that same work, aimed at a new consumer: the AI agent.
What Claude Code Actually Looks Like in 2026
A year ago, I described Claude Code as “an autonomous agent that can understand a problem description, break it into components, implement solutions across multiple files, run tests, fix bugs, and iterate.” That description is still technically accurate in the way that describing a car as “a box with wheels that moves” is technically accurate.
Here’s what the tool actually does now.
Claude Code runs agent teams. Not one agent working through a task sequentially, but multiple specialized agents working in parallel. You can have one agent analyzing code for reuse opportunities, another reviewing for quality issues, and a third checking for efficiency problems, all running simultaneously and reporting back. This isn’t a beta feature anymore. It shipped.
It has voice mode. You can speak your instructions instead of typing them. This sounds like a convenience feature until you realize how it changes the workflow. Describing architecture out loud, talking through constraints, explaining the problem context verbally while the agent works. Coding sessions now look less like typing at a terminal and more like directing a team.
It remembers things. Not in the way early AI tools pretended to remember by echoing back your last message, but through actual persistent memory across sessions. Architectural preferences, coding patterns, project context. The agent picks up where it left off.
It has a full 1M token context window on Opus 4.6. Skills that load on demand. Native IDE extensions for VS Code and JetBrains. Background agents that run while you do other work. A /loop command that executes recurring tasks on a schedule.
The tool I showed those founders in 2025 was a preview. What exists now is a development environment where the human’s job is to specify, direct, and verify while the agents handle execution.
Platform Engineers as Agent Architects
In the original piece, I called platform engineers “intent orchestrators.” I still like that framing, but it needs updating.
The reality in 2026 is that platform teams are becoming the people who design and maintain the systems that agents operate within. Not just infrastructure provisioning and CI/CD pipelines, but the guardrails, the quality gates, the context structures, and the governance frameworks that make agent-driven development safe at scale.
Think about it this way. When a single developer can deploy five parallel agents that each generate, test, and refine code autonomously, somebody needs to make sure those agents are working within the right constraints. Somebody needs to define what “safe self-service” looks like when the self-service consumer is an AI agent rather than a human developer. Somebody needs to build the automated security scanning that catches vulnerabilities at agent speed rather than human speed.
That somebody is the platform team.
The role isn’t less technical. If anything, it’s more technical. You need to understand how context windows work to design effective skills. You need to understand agent orchestration patterns to build useful development workflows. You need to understand security at a systems level to build governance that scales. This is the natural evolution of what platform engineering has always been, abstracting complexity and providing guardrails, but the consumer has changed.
The New Practitioners
Here’s something I didn’t cover in the original piece, and I should have.
The practitioners building with these tools are no longer limited to professional software engineers. A thoracic surgeon publicly shared that he learned to code through Claude Code, ran 67 autonomous agent sessions, and shipped a full-stack platform with a blog, analytics, and multi-agent orchestration. Not a toy app. A production system.
Product managers are building working prototypes to validate ideas before writing a single spec document. Researchers are building data analysis pipelines and frontend visualizations without waiting on engineering teams. Security teams are using agents to analyze unfamiliar codebases. Domain experts with deep knowledge of their field but no formal computer science training are building real tools that solve real problems.
This is the FORTRAN parallel playing out in real time. When compilers eliminated the need to think in machine language, the number of people who could build software exploded. The same thing is happening now. The barrier to entry hasn’t dropped to zero (you still need clear thinking, domain knowledge, and the discipline to verify outputs), but it has dropped far enough that the population of builders is expanding rapidly.
The bottleneck isn’t “can you code?” anymore. It’s “do you have taste?” Do you understand the problem well enough to know when the agent’s output is right and when it’s subtly wrong? Do you know what good looks like in your domain? Can you architect a solution before you try to build it?
The New Skill Stack (Updated)
Here’s the revised version of what I listed a year ago, adjusted for what we’ve actually learned.
Still essential, now proven: Systems thinking and architectural reasoning remain the foundation. If you can’t think about how components interact, agents will build you a collection of parts that don’t work together. Understanding tradeoffs between consistency and availability, speed and safety, simplicity and flexibility. These haven’t diminished in value at all. Domain expertise, meaning deep knowledge of the actual problem space, is more valuable than it’s ever been.
Now essential, no longer emerging: Clear written specification. Your specs are your code now, and the quality of your writing directly determines the quality of your output. Context engineering across rules, skills, memory, and project documentation. Multi-agent orchestration, understanding when to run agents in parallel versus sequentially, when to use specialized models for specific tasks, and how to coordinate handoffs between agents. Verification thinking, knowing how to validate that what the agent built actually solves your problem, not just that it compiles and passes tests.
Newly essential, barely discussed a year ago: Governance design. Building quality gates, security scanning, and audit trails that work at agent speed. Agent Skills authoring, structuring your team’s expertise as portable, reusable skill definitions that any agent can consume. Cognitive load management, being deliberate about what goes into context and what stays out, because more information doesn’t mean better results.
Diminished further than expected: Memorizing syntax and APIs was already declining. It’s now genuinely irrelevant for most work. Writing boilerplate code, generating test scaffolds, creating standard configurations. These are fully delegated tasks now. Manual code review for routine issues. Agents catch obvious problems faster than humans. The interesting review work, the architectural stuff, the subtle design implications, that’s still human territory.
What This Means for Education
I build training courses. I’ve been doing this for over twenty-five years. I’ve trained more than a million engineers through KodeKloud. And the model I’ve been using is fundamentally changing.
The old approach was: learn the syntax, practice the mechanics, build muscle memory, eventually understand the patterns. We front-loaded implementation skills because implementation was the bottleneck.
That model still produces people who can write code. It doesn’t produce people who can direct agents effectively. The gap between writing code yourself and specifying what an agent should build is real, and our training pipelines haven’t caught up.
The new model needs to front-load specification, context design, and verification. Teach people to think architecturally before they think syntactically. Teach them to write CLAUDE.md files and Agent Skills and structured constraints. Teach them to evaluate AI output critically, not just for correctness but for maintainability, security, and alignment with actual business requirements.
I’m not saying we abandon technical depth. The engineers who thrive in this environment are the ones who understand systems deeply enough to catch when an agent produces something that looks right but isn’t. But the sequence changes. Architecture first, then implementation details. Verification first, then generation. The ability to explain what you want clearly, before the ability to build it yourself.
The Cognitive Debt Problem
There’s a new failure mode that didn’t exist when I wrote the original piece. People are calling it “cognitive debt,” and it’s the accumulated cost of poorly managed AI interactions, context loss, and unreliable agent behavior over time.
Here’s how it shows up. A team adopts AI coding tools aggressively. Individual productivity metrics go up. Lines of code increase. Pull requests ship faster. But organizational delivery stays flat or actually degrades. Nobody fully understands the codebase anymore because significant portions were generated rather than written. Bugs in AI-generated code take longer to diagnose because the developer who “wrote” it doesn’t have the mental model of how it works. Knowledge that used to live in people’s heads now lives in agent context that gets lost between sessions.
This is the productivity paradox I mentioned in 2025, and it’s now well documented. The teams that avoid it are the ones who treat agent-generated code with the same rigor they’d apply to code from a new hire who’s fast but unfamiliar with the codebase. Review it. Understand it. Document why it exists. Build tests that capture the intent, not just the behavior.
Cognitive debt is what happens when you get the “vibe coding” speed without the “agentic engineering” discipline.
The Dissolution, One Year Later
Here’s the thing about layers dissolving: the layer doesn’t disappear. It becomes infrastructure. It becomes assumed. It becomes boring.
Assembly language didn’t vanish. Somewhere right now, someone is writing assembly for a device driver or a kernel module. But for 99% of programmers, it’s invisible. A layer they never touch.
In 2025, I said implementation wouldn’t vanish either. That prediction has held up. Code is still being written. It’s just increasingly written by agents operating under human direction. The layer hasn’t disappeared. It’s dropped below the line of what most practitioners need to think about directly.
What I underestimated was how fast the next layer would form on top. We’re already seeing the early signs of a world where you don’t just tell an agent what to build. You tell a team of agents, each with specialized skills, operating within governance frameworks, drawing from structured context, reporting through quality gates. The human isn’t just a specifier anymore. The human is the architect of the system that specifies.
It’s layers all the way up.
A year ago, I asked: “What are you going to build when the implementation layer dissolves?”
Some people answered that question. A surgeon built a platform. A three-person startup outshipped a team of twelve. Platform engineers became agent architects. Non-developers became builders.
The question for 2026 isn’t about whether the layer is dissolving. That’s settled. The question is whether you’re building the skills to work at the layer above it. Context engineering. Agent orchestration. Governance design. Specification as a discipline rather than an afterthought.
The implementation layer dissolved.
The orchestration layer is forming.
What are you going to build on top of it?
Michael Rishi Forrester is a Principal Training Architect and DevOps Advocate at KodeKloud, founder of The Performant Professionals, and has been preparing tomorrow’s innovators for over 25 years. He has trained more than 1 million engineers and focuses on helping technical professionals adapt through industry transformations.
Connect: @peopleforrester | linkedin.com/in/michaelrishiforrester | michaelrishiforrester.com
Tags: #AI #DevOps #PlatformEngineering #FutureOfWork #ClaudeCode #AgenticEngineering #ContextEngineering #SoftwareDevelopment #TechLeadership