By Michael Rishi Forrester | March 2026


Two things happened this week that I can’t stop thinking about.

The Bureau of Labor Statistics dropped the February 2026 jobs report. The economy shed 92,000 jobs. Unemployment held at 4.4%. The headlines did what headlines do: political takes, macro hand-wringing, the usual noise.

The same week, Anthropic published a research paper, Labor Market Impacts of AI: A New Measure and Early Evidence by Maxim Massenkoff and Peter McCrory. It’s one of the more honest pieces of AI labor economics I’ve come across, because it doesn’t try to tell you what AI could do. It looks at what AI is actually doing in real workplaces right now.

Read together, these two data points tell a story the mainstream narrative is completely missing.


Not a Firing Wave. A Hiring Freeze.

The Anthropic paper introduces what they call Observed Exposure, which measures the difference between what AI is theoretically capable of and what people are actually using it for in professional contexts today.

That gap is enormous. Computer and math occupations are theoretically 94% exposed to AI displacement. The actual observed coverage sits around 33%. Real deployment is running at roughly one-third of theoretical capability.

That’s not a reason to exhale. It’s a countdown.

The part that got the least attention: young workers aged 22 to 25 are entering AI-exposed occupations at a rate roughly 14% lower than they were in 2022. Companies are not laying off experienced analysts, customer service leads, or junior engineers en masse. They're quietly not replacing them when they leave, and they're not hiring the next cohort into those roles.

That’s the pattern. Not a dramatic collapse, just a slow drain.

Entry-level roles aren’t disappearing in a flash. They’re evaporating through attrition. A position opens, leadership pauses and asks whether an AI workflow can absorb it, then decides to wait. Three months later it’s not in the budget. Six months after that, nobody remembers what that person was even doing.


The Person Nobody Is Worried About

The public conversation about AI and jobs is almost entirely wrong about who is most at risk.

The Anthropic research found that the most AI-exposed workers tend to be female, older, more educated, and higher-paid. The ten most exposed occupations include computer programmers, customer service representatives, market research analysts, financial and investment analysts, and software QA testers.

This is not the warehouse worker automation story we’ve been rehearsing for a decade. The person most at risk right now is a knowledge worker in their 40s with a graduate degree doing information-dense work in an office. That’s a completely different economic and social problem than the one most policy conversations are preparing for.


What Actually Worries Me

I’ve spent 25 years watching large workforces try to adapt to major technology shifts. DevOps. Cloud migration. Kubernetes. Each time, there’s a recognizable pattern: the technology arrives faster than the organizational capacity to absorb it, and a cohort of people gets left behind not because they were bad at their jobs but because nobody built the bridge in time.

AI is moving faster than any of those prior shifts. And unlike those prior shifts, it doesn’t just change the tools. It changes what produces value in the first place.

What keeps me up isn’t the workers being displaced today. It’s the 22-year-old who enrolled in a CS program in 2022 because software engineering was the safest career bet, and is now graduating into a market that quietly stopped hiring for entry-level software roles while they were in school. That person has no runway. They haven’t had time to build the tacit knowledge, the judgment, and the pattern recognition that still makes experienced workers valuable.

Dario Amodei has said publicly that AI could eliminate 50% of entry-level white-collar jobs within 1 to 5 years. The Stanford/ADP payroll data already shows a 13% decline in entry-level hiring in AI-exposed occupations since 2022. These aren’t predictions anymore. They’re early data points.


Nobody Has This Figured Out

Something that gets glossed over in almost every AI and workforce conversation: nobody actually knows how to do this yet.

The frontier models are outperforming expectations faster than most enterprise transformation programs can keep up with. Organizations are genuinely trying to build the plane while it’s in the air. The companies claiming they have a mature AI transformation playbook are, in most cases, slightly ahead of their clients on a learning curve that everyone is still climbing.

That isn’t pessimism. It’s just accurate. And acknowledging it is more useful than pretending a clean methodology exists.

What I do know from watching a million engineers work through technology transitions is that the human and organizational side is always the hard part. The resistance isn’t technical. It’s psychological, cultural, and political. It shows up as the subject matter expert who feels threatened, the middle manager whose value was knowing things a language model now knows, the team that agrees AI is important in the all-hands meeting and then quietly changes nothing about how they work.

Those problems don’t have a technical solution. They require honest organizational leadership, genuine investment in people, and the willingness to tell the truth about what’s changing rather than managing perceptions.


The Deployment Gap Is Closing

The most important finding in the Anthropic paper is the gap between theoretical capability and actual deployment. AI is covering about a third of what it theoretically could right now. That gap closes as models improve and adoption spreads.

That closing gap gives organizations a window. Not a comfortable one, but a real one. The question is whether they use that time to build genuinely AI-ready workforces: not checkbox training programs or a two-hour intro to ChatGPT, but real capability development that helps people understand how to work alongside these systems, what judgment still belongs to humans, and how to stay valuable as the automation frontier moves.

The young worker hiring signal is where I think the real crisis is forming. If companies quietly stop backfilling entry-level roles, we end up with a generation that never got the foundational reps. In five years there won’t be a shortage of AI tools. There will be a shortage of mid-career practitioners who have the contextual judgment to use those tools well, because we never built the pipeline that creates them.


What I’m Watching

Mass displacement hasn't happened yet, and the Anthropic researchers are careful to say the early entry-level hiring signal is only barely statistically significant, with alternative explanations still on the table.

But the underlying trends are directional. Long-term unemployment is up 27% year-over-year. The information sector is contracting steadily. Some 330,000 federal knowledge workers are entering a job market where knowledge-role hiring is already slowing. Entry-level positions in AI-exposed fields are quietly shrinking for workers in their early 20s.

Put that next to a technology currently deployed at one-third of its theoretical ceiling and still accelerating, and the shape of what’s coming starts to come into focus.

Organizations that wait until the signal is impossible to ignore will find the runway is a lot shorter than it looks today.

P.S. If you're not reading KP Reddy, I highly recommend him: substack.com/@insights… He and I must have had the same thoughts at the same time. I found his article after I wrote mine and was struck by how similar our conclusions were.


Sources: Anthropic, “Labor Market Impacts of AI: A New Measure and Early Evidence” (Massenkoff & McCrory, March 5, 2026). U.S. Bureau of Labor Statistics, February 2026 Employment Situation (March 6, 2026). KP Reddy, “The Jobs Report Just Told You Something the AI Debate Won’t,” Substack (March 6, 2026).