We Have Never Deskilled the Mind Like This Before
And we have no idea what happens next
History has a pattern for what happens when a new technology makes a skilled trade obsolete.
The factory floor ended the artisan economy. Before mass production, a blacksmith spent years as an apprentice, learning the feel of metal under a hammer, when to quench and when to let cool, how to read the color of heated steel. That knowledge lived in hands and in bodies. It transferred slowly, person to person, across decades. It was the kind of skill that couldn’t be written down because most of it wasn’t conscious. It was accumulated.
Then came factories. A stamping machine could produce in seconds what a journeyman spent years learning to make by hand. The craft didn’t disappear overnight. But the value of craft collapsed. The apprentice pipeline dried up. Why spend seven years under a master blacksmith when a factory line would hire you on Monday?
The knowledge succession broke. And nobody really noticed until the masters were gone.
The Digital Age Reversed It. Briefly.
When computing arrived, something unusual happened: skilled craft came back.
Software engineering rebuilt the artisan economy for the knowledge age. Junior developers wrote terrible code, got their PRs destroyed in review, debugged things they didn’t understand, got paged at 2am and figured out why production was on fire. That was the apprenticeship. That was how the knowledge transferred. Not through documentation or onboarding decks. Through the doing.
A senior engineer wasn’t just someone who knew more. They were someone with scar tissue. They’d shipped the bad architecture and lived with the consequences. They’d optimized prematurely and spent six months unwinding it. They’d built the system that couldn’t scale and been the one who had to explain why. The judgment that made senior engineers valuable wasn’t knowledge. It was earned intuition, built over years in the implementation layer.
The guild model came back. Junior, mid, senior, staff, principal. A legitimate apprenticeship ladder, hidden inside job titles.
Now We’re Doing It Again
This is not the first time technology has gone after cognitive work.
In the 1940s and 50s, “computer” was a job title held by human beings, people whose entire profession was mathematical calculation. Electronic computers eliminated that profession within a generation. Spreadsheets wiped out roughly 400,000 accounting clerk positions after 1980. Word processors dissolved the typing pool. Legal databases automated the citation research that had previously consumed entire workdays for junior associates.
Technology has been replacing specific cognitive tasks for decades. This is not new.
What is new, and what matters, is the generality of what’s happening now.
Every prior wave of cognitive automation targeted a narrow function. Spreadsheets did arithmetic. Legal databases did citation lookup. Word processors handled document formatting. Each tool was a scalpel: precise, domain-specific, bounded. The humans who lost those jobs had somewhere to go, because the automation only reached so far.
An LLM doesn’t have one PhD’s worth of pattern recognition. It has the distilled output of essentially every PhD who ever published, every Stack Overflow thread ever written, every codebase ever committed to a public repository. It writes code, drafts documents, analyzes architecture, explains tradeoffs, and reviews pull requests, not in one narrow lane but across all of them at once. This is a different kind of tool. Not a scalpel. Something more like a general solvent.
The implementation layer is going the way of the stamping machine. Not gone, but no longer where the value lives. No longer where you invest years of human time.
And here’s where the collective mythology kicks in: the engineers who already have deep systems knowledge are not threatened by this. They’re multiplied by it. The business case for hiring senior talent has never been stronger.
True enough. But it leaves something out.
The Problem Is Where Elite Engineers Come From
The multiplier narrative has a data problem.
METR, an AI safety research organization, ran a randomized controlled trial in mid-2025 with 16 experienced open-source developers and 246 real-world tasks. They found that AI tools made experienced developers 19% slower, not faster. The developers predicted a 24% speedup. They still believed AI had helped them. They were wrong. A separate field experiment by MIT, Microsoft, and Accenture, covering roughly 1,974 developers, found that junior developers gained 27–39% productivity from AI assistance while senior developers gained only 8–13%.
The “100x engineer” is largely an aspiration right now. The truth is context-dependent: seniors use AI better for architectural decisions; juniors benefit more on implementation tasks. Neither group gets a clean multiplier.
But set the productivity debate aside. The harder problem runs deeper.
Every senior engineer working today built their judgment in the implementation layer. They got there by being junior, then mid, then senior. They wrote the code. They owned the bugs. They did the work that AI is now going to do.
The hiring signal is already visible. A Stanford and ADP payroll study published in August 2025, covering millions of workers, found that employment for software developers aged 22–25 had dropped roughly 20% from its late-2022 peak, while workers aged 30 and up in the same AI-exposed fields saw employment grow. A Harvard study that same year, examining 285,000 firms and 62 million workers, found that companies adopting generative AI saw junior employment drop roughly 9–10% within six quarters. SignalFire reported a 50% decline in new role starts by people with less than a year of post-graduate experience at major tech firms between 2019 and 2024.
To be fair: post-pandemic correction, rising interest rates, and reduced venture capital are all real factors here. The Stanford study’s authors themselves noted they make no claim that AI is the sole cause. This is a multi-causal decline, not a clean AI story.
But some of it is clearly AI. Salesforce announced it would hire no new software engineers in 2025. Shopify’s CEO issued a memo requiring teams to prove AI can’t do a job before asking for headcount. The rationale in each case is the same: why grow the junior headcount when AI can absorb implementation work?
The economics are obvious. Each decision makes complete sense for the organization making it.
But organizations don’t exist in isolation. Industries do. And the industry is collectively making the same rational decision, which means the industry is collectively eliminating the environment where the next generation of senior engineers gets built.
Sociologists call this a tragedy of the commons. Labor economists would recognize it from Harry Braverman’s Labor and Monopoly Capital (1974), the foundational work on deskilling, which documented how industrial capitalism systematically separated the conception of work from its execution, concentrating judgment at the top and eliminating it from the bottom. The Communications of the ACM published a feature in 2025 explicitly applying Braverman’s framework to AI. Cal Newport invoked Braverman by name in January 2026 when warning about AI-driven deskilling. When Anthropic’s own January 2026 Economic Index report used “deskilling” as an analytical category to describe Claude’s effect on occupations, the concept went from academic framing to industry description.
The Compounding Problem Nobody Is Naming
There’s a term now circulating in research circles: never-skilling.
Not deskilling, where you had a skill and lost it. Never-skilling is what happens to the generation that enters the workforce after the training-pipeline tasks are already gone. They don’t lose a skill. They never develop it in the first place, because the work that would have built it is being handled by agents, reviewed by elites who are the last people to have gone through the full crucible.
The junior developers who do get hired today aren’t writing code from scratch and fixing their mistakes. They’re reviewing AI output. Catching the errors that surface. Learning, in some degree, to recognize what wrong looks like from the outside.
Nobody knows whether that builds the same quality of judgment. Ask any experienced engineer whether reading about concurrency bugs is the same as having caused one. Ask whether reviewing AI-generated infrastructure code teaches you the same things as being the one paged at 3am when that infrastructure fails.
Carnegie Mellon researcher Aniket Kittur has warned that AI is producing a loss of basic knowledge among engineers who rely on it without engaging with it. Matt Beane at UC Santa Barbara has spent years studying how AI tools disrupt the apprenticeship dynamics through which expertise actually transfers. Microsoft’s CTO Mark Russinovich and Scott Hanselman published a piece in the Communications of the ACM in February 2026 proposing a new preceptor-based training model for engineering, explicitly because the traditional path is breaking down. They called the knowledge succession concern “a hot topic” in conversations with customers.
IBM announced in February 2026 that it would triple its entry-level hiring. AWS CEO Matt Garman called the idea of replacing all juniors with AI “one of the dumbest things I’ve ever heard.” These are the loudest institutional counter-signals to the prevailing tide. Whether they shift anything at the industry level, or whether they’re outliers in a race to the bottom, is an open question.
The Question Nobody Is Asking
The conversation about AI and the future of work is almost entirely about the workers who exist today. Will senior engineers be replaced? Will mid-level roles survive? What do engineers need to learn to stay relevant?
These are real questions. But they’re not the most important question.
The most important question is about the workers who don’t exist yet.
The twelve-year-old who wants to build things. The computer science student learning to code in a world where AI writes most of the code. The career changer who wants to enter tech but finds the entry-level pipeline largely automated away. What is their path to senior? What replaces the decade of implementation work that built every senior engineer working today?
Researchers at Stanford, Carnegie Mellon, and UC Santa Barbara are asking this. Microsoft is asking it. IBM is acting on it. But there is no policy framework, no industry-wide response, no coordinated effort. Just individual companies making the rational short-term call, and a handful of voices warning that the aggregate result of those individual calls will be visible, and costly, in about fifteen years.
That’s how knowledge succession failures always work. They’re invisible until the last master retires.
What History Actually Teaches Us
When electronic computers displaced human computers in the 1940s and 50s, most of the displaced workers were still alive to retrain. When spreadsheets restructured accounting, the profession adapted: roughly 400,000 clerk positions gone, but about 600,000 new accountant roles created. The cognitive work didn’t vanish; it moved up the abstraction ladder.
Maybe that pattern holds here. Maybe the junior engineers who can’t find traditional entry-level implementation roles find work as AI supervisors, output reviewers, and systems monitors, accumulating in that work a different but equivalent kind of judgment.
Maybe. But the analogy that should worry people more is what happened to the physical trades after the industrial revolution. The trades didn’t die. They contracted, specialized, and became luxuries. The knowledge largely survived. But the pipeline that had once produced thousands of journeymen per generation now produces dozens. The craft is preserved as heritage, not as infrastructure.
If software implementation follows the same path, preserved by the people who love it and handled at scale by agents, the question is whether the pipeline that produces the elites who oversee those agents can sustain itself on craft-scale volume.
The industries that navigated these transitions without catastrophic knowledge loss did so because they made deliberate choices. They built institutions. Trade programs, certification bodies, structured apprenticeships. Things that kept the transfer pipeline alive even when the economics of the moment argued against it. They recognized that the market would not solve this on its own, because the market’s time horizon is a quarterly report and the knowledge succession problem plays out over fifteen years.
Software engineering as an industry has never had to make this choice before. The technology never moved fast enough to outpace the apprenticeship pipeline.
Now it might. And the window for deliberate choices is open right now, while the people who hold the knowledge are still working.
Michael Rishi Forrester has spent 25 years training engineers through platform shifts, from Red Hat to ThoughtWorks to AWS to cloud-native to AI. He’s a Principal Training Architect at KodeKloud, founder of The Performant Professionals, and has watched more “this changes everything” moments than he can count. He’s not sure this one is different. He’s not sure it isn’t.
Bluesky | Mastodon | Hachyderm | LinkedIn | YouTube | X
Tags: #FutureOfWork #AI #SoftwareEngineering #TechLeadership #AIStrategy #EngineeringCulture #GenerativeAI
Opinion - We Have Never Deskilled the Mind Like This Before
We Have Never Deskilled the Mind Like This Before
And we have no idea what happens next
History has a pattern for what happens when a new technology makes a skilled trade obsolete.
The factory floor ended the artisan economy. Before mass production, a blacksmith spent years as an apprentice, learning the feel of metal under a hammer, when to quench and when to let cool, how to read the color of heated steel. That knowledge lived in hands and in bodies. It transferred slowly, person to person, across decades. It was the kind of skill that couldn’t be written down because most of it wasn’t conscious. It was accumulated..
Then came factories. A stamping machine could produce in seconds what a journeyman spent years learning to make by hand. The craft didn’t disappear overnight. But the value of craft collapsed. The apprentice pipeline dried up. Why spend seven years under a master blacksmith when a factory line would hire you on Monday?
The knowledge succession broke. And nobody really noticed until the masters were gone.
The Digital Age Reversed It. Briefly.
When computing arrived, something unusual happened: skilled craft came back.
Software engineering rebuilt the artisan economy for the knowledge age. Junior developers wrote terrible code, got their PRs destroyed in review, debugged things they didn’t understand, got paged at 2am and figured out why production was on fire. That was the apprenticeship. That was how the knowledge transferred. Not through documentation or onboarding decks. Through the doing..
A senior engineer wasn’t just someone who knew more. They were someone with scar tissue. They’d shipped the bad architecture and lived with the consequences. They’d optimized prematurely and spent six months unwinding it. They’d built the system that couldn’t scale and been the one who had to explain why. The judgment that made senior engineers valuable wasn’t knowledge. It was earned intuition, built over years in the implementation layer.
The guild model came back. Junior, mid, senior, staff, principal. A legitimate apprenticeship ladder, hidden inside job titles.
Now We’re Doing It Again
This is not the first time technology has gone after cognitive work.
In the 1940s and 50s, “computer” was a job title held by human beings, people whose entire profession was mathematical calculation. Electronic computers eliminated that profession within a generation. Spreadsheets wiped out roughly 400,000 accounting clerk positions after 1980. Word processors dissolved the typing pool. Legal databases automated the citation research that had previously consumed entire workdays for junior associates.
Technology has been replacing specific cognitive tasks for decades. This is not new.
What is new, and it matters, is the generality of what’s happening now.
Every prior wave of cognitive automation targeted a narrow function. Spreadsheets did arithmetic. Legal databases did citation lookup. Word processors handled document formatting. Each tool was a scalpel: precise, domain-specific, bounded. The humans who lost those jobs had somewhere to go, because the automation only reached so far.
An LLM doesn’t have one PhD’s worth of pattern recognition. It has the distilled output of essentially every PhD who ever published, every Stack Overflow thread ever written, every codebase ever committed to a public repository. It writes code, drafts documents, analyzes architecture, explains tradeoffs, and reviews pull requests, not in one narrow lane but across all of them at once. This is a different kind of tool. Not a scalpel. Something more like a general solvent.
The implementation layer is going the way of the stamping machine. Not gone, but no longer where the value lives. No longer where you invest years of human time.
And here’s where the collective mythology kicks in: the engineers who already have deep systems knowledge are not threatened by this. They’re multiplied by it. The business case for hiring senior talent has never been stronger.
True enough. But it leaves something out.
The Problem Is Where Elite Engineers Come From
The multiplier narrative has a data problem.
METR, an AI safety research organization, ran a randomized controlled trial in mid-2025 with 16 experienced open-source developers and 246 real-world tasks. They found that AI tools made experienced developers 19% slower, not faster. The developers predicted a 24% speedup. They still believed AI had helped them. They were wrong. A separate field experiment by MIT, Microsoft, and Accenture, covering roughly 1,974 developers, found that junior developers gained 27–39% productivity from AI assistance while senior developers gained only 8–13%.
The “100x engineer” is largely an aspiration right now. The truth is context-dependent: seniors use AI better for architectural decisions; juniors benefit more on implementation tasks. Neither group gets a clean multiplier.
But set the productivity debate aside. The harder problem runs deeper.
Every senior engineer working today built their judgment in the implementation layer. They got there by being junior, then mid, then senior. They wrote the code. They owned the bugs. They did the work that AI is now going to do.
The hiring signal is already visible. A Stanford and ADP payroll study published in August 2025, covering millions of workers, found that employment for software developers aged 22–25 had dropped roughly 20% from its late-2022 peak, while workers aged 30 and up in the same AI-exposed fields saw employment grow. A Harvard study that same year, examining 285,000 firms and 62 million workers, found that companies adopting generative AI saw junior employment drop roughly 9–10% within six quarters. SignalFire reported a 50% decline in new role starts by people with less than a year of post-graduate experience at major tech firms between 2019 and 2024.
To be fair: post-pandemic correction, rising interest rates, and reduced venture capital are all real factors here. The Stanford study’s authors themselves noted they make no claim that AI is the sole cause. This is a multi-causal decline, not a clean AI story.
But some of it is clearly AI. Salesforce announced it would hire no new software engineers in 2025. Shopify’s CEO issued a memo requiring teams to prove AI can’t do a job before asking for headcount. The rationale in each case is the same: why grow the junior headcount when AI can absorb implementation work?
The economics are obvious. Each decision makes complete sense for the organization making it.
But organizations don’t exist in isolation. Industries do. And the industry is collectively making the same rational decision, which means the industry is collectively eliminating the environment where the next generation of senior engineers gets built.
Economists call this a tragedy of the commons. Labor economists would recognize it from Harry Braverman’s Labor and Monopoly Capital (1974), the foundational work on deskilling, which documented how industrial capitalism systematically separated the conception of work from its execution, concentrating judgment at the top and eliminating it from the bottom. The Communications of the ACM published a feature in 2025 explicitly applying Braverman’s framework to AI. Cal Newport invoked Braverman by name in January 2026 when warning about AI-driven deskilling. When Anthropic’s own January 2026 Economic Index report used “deskilling” as an analytical category to describe Claude’s effect on occupations, the concept went from academic framing to industry description.
## The Compounding Problem Nobody Is Naming
There’s a term now circulating in research circles: never-skilling.
Not deskilling, where you had a skill and lost it. Never-skilling is what happens to the generation that enters the workforce after the training-pipeline tasks are already gone. They don’t lose a skill. They never develop it in the first place, because the work that would have built it is being handled by agents, reviewed by elites who are the last people to have gone through the full crucible.
The junior developers who do get hired today aren’t writing code from scratch and fixing their mistakes. They’re reviewing AI output. Catching the errors that surface. Learning, to some degree, to recognize what wrong looks like from the outside.
Nobody knows whether that builds the same quality of judgment. Ask any experienced engineer whether reading about concurrency bugs is the same as having caused one. Ask whether reviewing AI-generated infrastructure code teaches you the same things as being the one paged at 3am when that infrastructure fails.
Carnegie Mellon researcher Aniket Kittur has warned that AI is producing a loss of basic knowledge among engineers who rely on it without engaging with it. Matt Beane at UC Santa Barbara has spent years studying how AI tools disrupt the apprenticeship dynamics through which expertise actually transfers. Microsoft’s CTO Mark Russinovich and Scott Hanselman published a piece in the Communications of the ACM in February 2026 proposing a new preceptor-based training model for engineering, explicitly because the traditional path is breaking down. They called the knowledge succession concern “a hot topic” in conversations with customers.
IBM announced in February 2026 that it would triple its entry-level hiring. AWS CEO Matt Garman called the idea of replacing all juniors with AI “one of the dumbest things I’ve ever heard.” These are the loudest institutional counter-signals to the prevailing tide. Whether they shift anything at the industry level, or whether they’re outliers in a race to the bottom, is an open question.
## The Question Nobody Is Asking
The conversation about AI and the future of work is almost entirely about the workers who exist today. Will senior engineers be replaced? Will mid-level roles survive? What do engineers need to learn to stay relevant?
These are real questions. But they’re not the most important question.
The most important question is about the workers who don’t exist yet.
The twelve-year-old who wants to build things. The computer science student learning to code in a world where AI writes most of the code. The career changer who wants to enter tech but finds the entry-level pipeline largely automated away. What is their path to senior? What replaces the decade of implementation work that built every senior engineer working today?
Researchers at Stanford, Carnegie Mellon, and UC Santa Barbara are asking this. Microsoft is asking it. IBM is acting on it. But there is no policy framework, no industry-wide response, no coordinated effort. Just individual companies making the rational short-term call, and a handful of voices warning that the aggregate result of those individual calls will be visible, and costly, in about fifteen years.
That’s how knowledge succession failures always work. They’re invisible until the last master retires.
## What History Actually Teaches Us
When electronic computers displaced human computers in the 1940s and 50s, most of the displaced workers were still alive to retrain. When spreadsheets restructured accounting, the profession adapted: roughly 400,000 clerk positions gone, but about 600,000 new accountant roles created. The cognitive work didn’t vanish; it moved up the abstraction ladder.
Maybe that pattern holds here. Maybe the junior engineers who can’t find traditional entry-level implementation roles find work as AI supervisors, output reviewers, and systems monitors, accumulating in that work a different but equivalent kind of judgment.
Maybe. But the analogy that should worry people more is what happened to physical trades after the industrial revolution. The trades didn’t die. They contracted, specialized, and became luxury. The knowledge largely survived. But the pipeline that had once produced thousands of journeymen per generation now produces dozens. The craft is preserved as heritage, not as infrastructure.
If software implementation follows the same path, preserved by the people who love it and handled at scale by agents, the question is whether the pipeline that produces the elites who oversee those agents can sustain itself on craft-scale volume.
The industries that navigated these transitions without catastrophic knowledge loss did so because they made deliberate choices. They built institutions. Trade programs, certification bodies, structured apprenticeships. Things that kept the transfer pipeline alive even when the economics of the moment argued against it. They recognized that the market would not solve this on its own, because the market’s time horizon is a quarterly report and the knowledge succession problem plays out over fifteen years.
Software engineering as an industry has never had to make this choice before. The technology never moved fast enough to outpace the apprenticeship pipeline.
Now it might. And the window for deliberate choices is open right now, while the people who hold the knowledge are still working.
Michael Rishi Forrester has spent 25 years training engineers through platform shifts, from Red Hat to ThoughtWorks to AWS to cloud-native to AI. He’s a Principal Training Architect at KodeKloud, founder of The Performant Professionals, and has watched more “this changes everything” moments than he can count. He’s not sure this one is different. He’s not sure it isn’t.
# Opinion - The Quiet Hiring Freeze Nobody Is Talking About
By Michael Rishi Forrester | March 2026
Two things happened this week that I can’t stop thinking about.
The Bureau of Labor Statistics dropped the February 2026 jobs report. The economy shed 92,000 jobs. Unemployment held at 4.4%. The headlines did what headlines do: political takes, macro hand-wringing, the usual noise.
The same week, Anthropic published a research paper, Labor Market Impacts of AI: A New Measure and Early Evidence by Maxim Massenkoff and Peter McCrory. It’s one of the more honest pieces of AI labor economics I’ve come across, because it doesn’t try to tell you what AI could do. It looks at what AI is actually doing in real workplaces right now.
Read together, these two data points tell a story the main narrative is completely missing.
## Not a Firing Wave. A Hiring Freeze.
The Anthropic paper introduces what they call Observed Exposure, which measures the difference between what AI is theoretically capable of and what people are actually using it for in professional contexts today.
That gap is enormous. Computer and math occupations are theoretically 94% exposed to AI displacement. The actual observed coverage sits around 33%. Real deployment is running at roughly one-third of theoretical capability.
That’s not a reason to exhale. It’s a countdown.
The part that got the least attention: young workers aged 22 to 25 are entering AI-exposed occupations at a rate roughly 14% lower than they were in 2022. Companies are not laying off experienced analysts, customer service leads, or junior engineers en masse. They’re quietly not replacing them when they leave, and they’re not hiring the next cohort into those roles.
That’s the pattern. Not a dramatic collapse, just a slow drain.
Entry-level roles aren’t disappearing in a flash. They’re evaporating through attrition. A position opens, leadership pauses and asks whether an AI workflow can absorb it, then decides to wait. Three months later it’s not in the budget. Six months after that, nobody remembers what that person was even doing.
## The Person Nobody Is Worried About
The public conversation about AI and jobs is almost entirely wrong about who is most at risk.
The Anthropic research found that the most AI-exposed workers tend to be female, older, more educated, and higher-paid. The ten most exposed occupations include computer programmers, customer service representatives, market research analysts, financial and investment analysts, and software QA testers.
This is not the warehouse worker automation story we’ve been rehearsing for a decade. The person most at risk right now is a knowledge worker in their 40s with a graduate degree doing information-dense work in an office. That’s a completely different economic and social problem than the one most policy conversations are preparing for.
## What Actually Worries Me
I’ve spent 25 years watching large workforces try to adapt to major technology shifts. DevOps. Cloud migration. Kubernetes. Each time, there’s a recognizable pattern: the technology arrives faster than the organizational capacity to absorb it, and a cohort of people gets left behind not because they were bad at their jobs but because nobody built the bridge in time.
AI is moving faster than any of those prior shifts. And unlike those prior shifts, it doesn’t just change the tools. It changes what produces value in the first place.
What keeps me up isn’t the workers being displaced today. It’s the 22-year-old who enrolled in a CS program in 2022 because software engineering was the safest career bet, and is now graduating into a market that quietly stopped hiring for entry-level software roles while they were in school. That person has no runway. They haven’t had time to build the tacit knowledge, the judgment, and the pattern recognition that still makes experienced workers valuable.
Dario Amodei has said publicly that AI could eliminate 50% of entry-level white-collar jobs within 1 to 5 years. The Stanford/ADP payroll data already shows a 13% decline in entry-level hiring in AI-exposed occupations since 2022. These aren’t predictions anymore. They’re early data points.
## Nobody Has This Figured Out
Something that gets glossed over in almost every AI and workforce conversation: nobody actually knows how to do this yet.
The frontier models are outperforming expectations faster than most enterprise transformation programs can keep up with. Organizations are genuinely trying to build the plane while it’s in the air. The companies claiming they have a mature AI transformation playbook are, in most cases, slightly ahead of their clients on a learning curve that everyone is still climbing.
That isn’t pessimism. It’s just accurate. And acknowledging it is more useful than pretending a clean methodology exists.
What I do know from watching a million engineers work through technology transitions is that the human and organizational side is always the hard part. The resistance isn’t technical. It’s psychological, cultural, and political. It shows up as the subject matter expert who feels threatened, the middle manager whose value was knowing things a language model now knows, the team that agrees AI is important in the all-hands meeting and then quietly changes nothing about how they work.
Those problems don’t have a technical solution. They require honest organizational leadership, genuine investment in people, and the willingness to tell the truth about what’s changing rather than managing perceptions.
## The Deployment Gap Is Closing
The most important finding in the Anthropic paper is that gap between theoretical capability and actual deployment. AI is covering about a third of what it theoretically could right now. That gap closes as models improve and adoption spreads.
Organizations have a window because of it. Not a comfortable one, but a real one. The question is whether they use that time to build genuine AI-ready workforces, not checkbox training programs or a two-hour intro to ChatGPT, but real capability development that helps people understand how to work alongside these systems, what judgment still belongs to humans, and how to stay valuable as the automation frontier moves.
The young worker hiring signal is where I think the real crisis is forming. If companies quietly stop backfilling entry-level roles, we end up with a generation that never got the foundational reps. In five years there won’t be a shortage of AI tools. There will be a shortage of mid-career practitioners who have the contextual judgment to use those tools well, because we never built the pipeline that creates them.
## What I’m Watching
Mass displacement hasn’t happened yet, and the Anthropic researchers are careful to say the early entry-level hiring signal is only barely statistically significant, with alternative explanations still on the table.
But the underlying trends are directional. Long-term unemployment is up 27% year-over-year. The information sector is contracting steadily. 330,000 federal knowledge workers are entering a job market already slowing in knowledge-role hiring. Entry-level positions in AI-exposed fields are quietly shrinking for workers in their early 20s.
Put that next to a technology currently deployed at one-third of its theoretical ceiling and still accelerating, and the shape of what’s coming starts to come into focus.
Organizations that wait until the signal is impossible to ignore will find the runway is a lot shorter than it looks today.
P.S. If you’re not reading KP Reddy, I highly recommend him: substack.com/@insights… He and I must have had the same thoughts at the same time. I found his article after I wrote mine and was struck by how similar our conclusions were.
Sources: Anthropic, “Labor Market Impacts of AI: A New Measure and Early Evidence” (Massenkoff & McCrory, March 5, 2026). U.S. Bureau of Labor Statistics, February 2026 Employment Situation (March 6, 2026). KP Reddy, “The Jobs Report Just Told You Something the AI Debate Won’t,” Substack (March 6, 2026).
# I Built an ArgoCD MCP Server. Here’s Why It’s Different.
I just published an ArgoCD MCP server, and I want to talk about why I bothered when there’s already an official one from argoproj-labs.
The short version: I wanted guardrails. Real ones.
https://github.com/peopleforrester/mcp-k8s-observability-argocd-server
## The Problem with Most MCP Servers
Most MCP servers I’ve seen treat all operations the same. Read an application? Delete an application? Same friction level. That’s fine when you’re experimenting on your laptop. It’s less fine at 3 AM when an LLM agent is helping you debug a production outage and you’re too tired to notice it’s about to delete something important.
The official argoproj-labs server has a binary read-only toggle. That’s it. Either you can do everything, or you can only read. No middle ground.
I wanted something that understood the difference between “show me what’s deployed” and “delete this application from production.”
## What I Did Differently
### Dry-Run by Default
Every write operation previews changes first. You have to explicitly set dry_run=false to actually apply anything. This isn’t about not trusting the LLM—it’s about not trusting myself at 3 AM.
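The pattern is simple to sketch. A hypothetical write tool defaults to previewing; the tool name `sync_application` and the return shape here are illustrative assumptions, not the server’s actual API:

```python
# Sketch of the dry-run-by-default pattern described above. The tool name
# and return shape are illustrative, not the server's real interface.

def sync_application(name: str, dry_run: bool = True) -> dict:
    plan = {"application": name, "action": "sync"}
    if dry_run:
        # Preview only: report what would change without touching the cluster.
        return {**plan, "dry_run": True, "applied": False}
    # A real implementation would call the ArgoCD API here.
    return {**plan, "dry_run": False, "applied": True}
```

The key design choice is that the safe path is the zero-argument path: forgetting a parameter gets you a preview, never a mutation.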
### Three Permission Tiers, Not Two
- Tier 1 (Read): Always allowed, rate-limited
- Tier 2 (Write): Requires `MCP_READ_ONLY=false`
- Tier 3 (Destructive): Requires BOTH confirmation parameters: `confirm=true` AND `confirm_name` matching the target
The last one is important. Deleting an application isn’t just “are you sure?” It’s “type the name of what you’re about to delete.” That extra friction is intentional.
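Mechanically, the gate can be sketched in a few lines. This is an illustration of the tier logic as described, not the server’s actual code; the function name and tier constants are mine:

```python
import os

# Illustrative sketch of the three-tier permission gate described above.
# check_permission and the tier constants are assumptions, not the real API.

READ, WRITE, DESTRUCTIVE = 1, 2, 3

def check_permission(tier, confirm=False, confirm_name="", target=""):
    """Raise PermissionError unless the request clears its tier's gate."""
    read_only = os.environ.get("MCP_READ_ONLY", "true").lower() == "true"
    no_destructive = os.environ.get("MCP_DISABLE_DESTRUCTIVE", "true").lower() == "true"

    if tier >= WRITE and read_only:
        # Agent-friendly message: say exactly how to unblock the operation.
        raise PermissionError("Writes are blocked. To enable: Set MCP_READ_ONLY=false")
    if tier == DESTRUCTIVE:
        if no_destructive:
            raise PermissionError(
                "Destructive operations are blocked. "
                "To enable: Set MCP_DISABLE_DESTRUCTIVE=false")
        if not (confirm and confirm_name == target):
            raise PermissionError(
                f"ConfirmationRequired: Set confirm=true AND confirm_name='{target}'")
```

Note that a mismatched `confirm_name` fails exactly like a missing one: the agent has to echo back the real target before anything is deleted.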
### Rate Limiting
Configurable limits on API calls per time window. Default is 100 calls per 60 seconds. This exists because LLMs sometimes get stuck in loops. Without rate limiting, a confused agent can hammer your ArgoCD API hundreds of times in seconds. Ask me how I know.
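The idea is a plain sliding window: remember recent call timestamps, refuse once the window is full. A minimal sketch, with class and method names that are mine rather than the server’s:

```python
import time
from collections import deque

# Illustrative sliding-window rate limiter along the lines described above.

class RateLimiter:
    def __init__(self, max_calls=100, window_seconds=60.0):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls = deque()  # monotonic timestamps of recent calls

    def allow(self):
        """Return True if a call is permitted right now, recording it if so."""
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            return False
        self.calls.append(now)
        return True
```

A looping agent hits the ceiling within the window and gets a clean refusal instead of hammering the ArgoCD API.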
### Audit Logging
Structured JSON logs with correlation IDs. Every operation—reads, writes, blocks, errors—gets logged. Optional file-based audit log if you want a paper trail. When something goes wrong, you want to know exactly what the agent did and when.
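A minimal version of that pattern, assuming nothing about the server’s internals beyond what’s described (one JSON record per operation, each carrying a correlation ID; names here are illustrative):

```python
import json
import logging
import sys
import uuid

# Sketch of structured JSON audit logging with correlation IDs, as described
# above. The audit() helper and field names are illustrative assumptions.

logger = logging.getLogger("mcp-audit")
handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def audit(operation, outcome, **details):
    """Emit one JSON audit record and return its correlation ID."""
    correlation_id = str(uuid.uuid4())
    record = {"correlation_id": correlation_id,
              "operation": operation,
              "outcome": outcome,  # e.g. "allowed", "blocked", "error"
              **details}
    logger.info(json.dumps(record))
    return correlation_id
```

Swapping the `StreamHandler` for a `FileHandler` gives you the optional file-based paper trail.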
### Secret Masking
Enabled by default. Sensitive values get redacted in output. The LLM doesn’t need to see your actual credentials to help you debug a sync failure.
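As a sketch of that redaction, here is one way to walk a response and mask values under sensitive-looking keys. The key patterns are my assumption; a real implementation would be stricter:

```python
import re

# Hedged sketch of secret masking: redact values whose keys look sensitive.
# The key list is illustrative, not the server's actual pattern set.

SENSITIVE_KEYS = re.compile(r"(password|token|secret|apikey|api_key)", re.IGNORECASE)

def mask_secrets(obj):
    """Recursively replace values under sensitive-looking keys with '***'."""
    if isinstance(obj, dict):
        return {k: "***" if SENSITIVE_KEYS.search(k) else mask_secrets(v)
                for k, v in obj.items()}
    if isinstance(obj, list):
        return [mask_secrets(v) for v in obj]
    return obj
```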
### Single-Cluster Restriction Mode
Optional setting that restricts operations to the default cluster only. Useful when you want to give an agent access to dev but keep it away from prod entirely.
### Agent-Friendly Error Messages
Every blocked operation tells you exactly why it was blocked and what to do about it. “To enable: Set MCP_READ_ONLY=false” instead of a generic “permission denied.” LLMs are better at recovering when you give them actionable information.
## The Configuration
Here’s the full set of environment variables:
| Variable | Default | Purpose |
|---|---|---|
| `MCP_READ_ONLY` | `true` | Block ALL write operations |
| `MCP_DISABLE_DESTRUCTIVE` | `true` | Block delete/prune even if writes enabled |
| `MCP_SINGLE_CLUSTER` | `false` | Restrict to default cluster only |
| `MCP_AUDIT_LOG` | (disabled) | Path to audit log file |
| `MCP_MASK_SECRETS` | `true` | Redact sensitive values in output |
| `MCP_RATE_LIMIT_CALLS` | `100` | Max API calls per window |
| `MCP_RATE_LIMIT_WINDOW` | `60` | Window size in seconds |
Notice the defaults. Out of the box, you get a read-only server with secret masking and rate limiting. You have to explicitly opt into more dangerous operations.
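As a sketch, loading that table from the environment with the documented safe defaults might look like this (the `Config` dataclass and `_flag` helper are illustrative, not the server’s actual code):

```python
import os
from dataclasses import dataclass
from typing import Optional

# Illustrative loader for the configuration table above. Defaults mirror the
# documented ones: read-only, destructive ops blocked, masking on.

@dataclass
class Config:
    read_only: bool
    disable_destructive: bool
    single_cluster: bool
    audit_log: Optional[str]
    mask_secrets: bool
    rate_limit_calls: int
    rate_limit_window: int

def _flag(name, default):
    """Interpret an environment variable as a boolean flag."""
    return os.environ.get(name, default).lower() in ("1", "true", "yes")

def load_config():
    return Config(
        read_only=_flag("MCP_READ_ONLY", "true"),
        disable_destructive=_flag("MCP_DISABLE_DESTRUCTIVE", "true"),
        single_cluster=_flag("MCP_SINGLE_CLUSTER", "false"),
        audit_log=os.environ.get("MCP_AUDIT_LOG") or None,
        mask_secrets=_flag("MCP_MASK_SECRETS", "true"),
        rate_limit_calls=int(os.environ.get("MCP_RATE_LIMIT_CALLS", "100")),
        rate_limit_window=int(os.environ.get("MCP_RATE_LIMIT_WINDOW", "60")),
    )
```

With no environment variables set at all, `load_config()` yields the safe posture: read-only, no destructive operations, secrets masked, rate limiting at 100 calls per 60 seconds.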
## What This Actually Looks Like
Let’s say you’re using Claude Desktop with this server and you ask it to delete an application.
Without proper confirmation, the agent gets back:
ConfirmationRequired: Deleting application 'my-app' requires confirmation.
To confirm: Set confirm=true AND confirm_name='my-app'
Impact: This will remove the application and all its resources from the cluster.
The agent has to make a second call with both parameters matching. That’s the friction I wanted.
## Why This Matters
MCP servers are giving AI agents direct access to your infrastructure. The convenience is real—having an LLM that can actually see your deployments, check sync status, and trigger operations is genuinely useful.
But convenience without guardrails is how incidents happen.
I built this server with the assumption that the agent will occasionally misunderstand what I want. That I’ll occasionally approve something I shouldn’t have. That at some point, someone will be tired and not paying close attention while an LLM is making changes to production.
The goal isn’t to make it impossible to do dangerous things. The goal is to make dangerous things require explicit, unambiguous intent.
Production systems deserve more friction than your laptop.
The server is available now. If you’re running ArgoCD and want to give AI agents access with actual guardrails, check it out.
# Opinion - AI Is Replacing 80% of Coding. These Are the Skills That I Think Will Still Matter... at least for a while longer
AI has replaced roughly 80% of what we traditionally called “coding skills.” It will keep replacing more. A handful of capabilities remain human. I’m not panicking, but I am paying attention.
According to Harness’s 2025 State of AI in Software Engineering report, 72% of organizations have experienced at least one production incident directly caused by AI-generated code. AI writes code faster than any human ever could. It also breaks things faster than any human ever could.
I’m writing more code now than I have in the past twenty years. When I say “writing,” I mean guiding the process, shepherding syntax, reviewing output. The actual generation isn’t me anymore.
This November marks 30 years in infrastructure, operations, DevOps, and platform engineering. Red Hat. ThoughtWorks. AWS. KodeKloud. I’ve watched every “this will replace engineers” wave come and go. Mainframes to client-server. Waterfall to Agile. On-prem to cloud. VMs to Kubernetes. Internal developer platforms to platform engineering.
Each transition killed certain tasks while making others more valuable. AI-assisted coding follows the same pattern. The code is being automated. The engineering is not.
Some lower-level engineering skills are disappearing. Some design decisions that once required years of experience are now handled by AI in seconds. But there are capabilities that AI genuinely cannot replicate. These are worth mastering now because they’re becoming scarcer.
## Human Connection
The most critical moments in any engineering organization aren’t technical. They’re human. The production incident where someone needs to make the call to roll back or push forward. The architecture review where senior engineers have incompatible visions. The retrospective where a team needs to acknowledge failure without assigning blame, where you’re either building a culture of safety or a culture of fear.
I’ve trained hundreds of engineers across multiple organizations. The ones who become truly senior aren’t distinguished by their technical knowledge. Technical knowledge can be acquired. They’re distinguished by their ability to build trust, navigate conflict, create psychological safety, and communicate under pressure.
AI can answer technical questions. It cannot sit with a junior developer who just caused their first production incident and help them process the experience without shame. It can’t read the exhaustion or worry in a team standup. It can’t advocate for sustainable pace based on nonverbal cues it will never perceive.
Engineering is a team sport. The human skills are the sport itself.
## Legal and Ethical Accountability
AI cannot be sued. You can.
If you blindly accept AI-generated code and it causes a data breach, you’re liable. If AI hallucinates a GPL-licensed snippet into your proprietary codebase, you’re liable. If an AI-generated algorithm introduces bias that harms users, you’re liable. Engineers are the accountability shield between AI capabilities and organizational risk.
I’m not talking about paranoia. AI operates without consequences. Humans operate within systems of professional responsibility, legal liability, and ethical obligation. That’s just reality.
When I review AI-generated code, I’m not just checking for bugs. I’m checking for license compliance, security vulnerabilities, privacy implications, and alignment with documented architecture decisions. The AI may not know we’re in a regulated industry with specific audit requirements. It may not be aware of institutional dependencies that matter to how the codebase actually functions.
Can we provide that context to AI? Yes, and we should. But accountability requires understanding consequences at a strategic level. AI generates outputs. Humans own outcomes.
## Strategic Systems Thinking
AI optimizes for today. Good systems designers think about evolutionary architectures. Who maintains 10,000 AI-generated test cases when the schema changes?
Hopefully AI. But just because we can do something fast and repetitively doesn’t mean we should.
I see teams falling into this trap constantly. AI generates tests faster than humans can read them. Teams generate thousands of tests, achieve 95% coverage, declare victory. Six months later, they’re modifying 400 tests because the codebase changed. Will AI handle that maintenance? Probably. But is that approach strategically sound? Is testing being thoughtfully applied, or just blindly applied because we now have the capability?
Strategic thinking asks uncomfortable questions:
- How do we audit code produced faster than it can be reviewed?
- What’s our plan when the model we depend on gets deprecated or changes behavior?
- Who owns the technical debt that AI generates at scale?
- What happens when the engineer who understands the AI-generated codebase leaves?
AI is an incredible force multiplier for producing artifacts. It has no concept of maintaining them. Every line of code, human or AI-generated, is a liability that someone will have to understand, modify, and debug for years to come.
Velocity without sustainability is just faster accumulation of technical debt.
## Translating Business Needs
When a stakeholder says “make it faster,” AI starts coding. A human asks: “How much are you willing to pay for that speed?”
That’s what architects do. They translate business needs into technical reality while surfacing the tradeoffs. When someone says they want 100% uptime, the architect asks what that means, what it costs, and what it implies for security and operations. When someone wants more resilience, the architect might respond: “That’s a million-dollar DR plan. Here’s what you get for that investment.”
“Make it faster” could mean any number of things:
- Our competitor launched a faster product and I’m panicking
- One customer complained and they happen to be loud
- I don’t understand why this takes time and I need education, not optimization
- We’re actually willing to spend $500,000 on infrastructure to shave off 200 milliseconds
AI cannot read the room. It can’t notice that the stakeholder’s real concern is job security, not system performance. It can’t recognize when the correct answer is “your current speed is actually fine, here’s the data” rather than immediately jumping to implementation.
Requirements translation is a human skill because it requires understanding human motivations, organizational politics, and the difference between stated preferences and revealed preferences. AI takes the ticket at face value. The engineer investigates what’s actually being asked.
Can we teach AI to do this? Yes. But the subtleties here will likely remain in the human domain for years.
## Understanding Legacy Code
AI sees messy code and wants to refactor it. A human engineer knows that messy code has survived for a reason.
That function with 47 parameters and a comment that says “DO NOT TOUCH - see incident #4521”? AI wants to clean it up. The senior engineer knows that function handles an edge case that only appears under specific conditions. Maybe a particular customer in Japan submitting an order at exactly midnight UTC.
Legacy code is an archaeological record. Every production incident, every business pivot, every 3am hotfix that kept the company alive. The mess isn’t incompetence. It’s institutional memory encoded in syntax.
AI-assisted refactors can reintroduce bugs that were fixed a decade ago. The original fix was ugly, and AI optimizes for elegance. But the original developers weren’t writing ugly code. They were writing defensive code against threats the AI has never encountered.
Understanding legacy systems requires humility. It requires asking “why is this here?” before asking “how do I fix this?” AI only knows how to ask the second question. We still need humans for the first.
## Architectural Reasoning
AI suggests textbook solutions. It doesn’t know about hidden constraints, regulatory requirements, political landscapes, or someone’s inexplicable preference for Redis over Memcached.
When you ask AI to design a system, it gives you the Stack Overflow consensus answer. It doesn’t know that your CTO has a vendetta against MongoDB from a previous job. It doesn’t know that your compliance team will reject anything storing data outside your home region. It doesn’t know that the “obviously correct” microservices architecture will get blocked because your ops team has three people and they’re already drowning.
Architecture isn’t about knowing the best solution. It’s about knowing the best viable solution. The one that accounts for organizational capacity, team skills, budget constraints, and the political capital required to actually ship it.
I’ve watched AI suggest Kubernetes deployments to teams that can barely manage a single EC2 instance. Technically correct. Organizationally catastrophic.
The architect’s job isn’t to find the optimal solution in a vacuum. It’s to find the optimal solution in your vacuum, with all its dust, debris, and hidden obstacles.
## What To Do About It
If you’re an engineer watching AI transform your field, stop competing with AI at code generation. You’ll lose that race, and it’s not a race worth winning.
Instead, invest in the skills that make code generation valuable. Build trust. Understand accountability. Think strategically. Translate business needs. Respect legacy systems. Reason about architecture in context.
The engineers who thrive won’t be the ones who can prompt the best code. They’ll be the ones who can take that code and turn it into reliable, maintainable systems that actually serve human needs.
The code is being automated. The engineering never will be.
Michael Rishi Forrester is a Principal Training Architect at KodeKloud and founder of The Performant Professionals. November 2026 marks 30 years in infrastructure, operations, and DevOps. His focus: preparing tomorrow’s innovators while elevating the average.