While the headlines scream of mass layoffs, the data suggests a more subtle and structural shift: the ladder is being pulled up from the bottom.
Source: https://www.anthropic.com/research/labor-market-impacts (Figure 2)
The Ghost in the Unemployment Statistics
“AI is coming for your job.”
It’s the refrain of every tech influencer and doom-scrolling headline. Two days ago, a report dropped that seemed to confirm the worst. Anthropic, one of the leading labs behind the very tech in question, released Labor Market Impacts of AI: A New Measure and Early Evidence.
The reaction was instantaneous.
- “74.5% of programmers at risk!”
- “The end of entry-level work!”
But as someone who spends their days parsing causal inference and economic data, I find the actual numbers tell a story that is far more complex—and arguably more concerning—than a simple wave of pink slips.
The real danger isn’t that we’ll be fired tomorrow. The danger is that the door we used to enter the industry is slowly being locked from the inside.
Moving Beyond “What AI Can Do”
Most studies on AI and labor follow a predictable script. They take a database of job tasks (like O*NET), ask a panel of experts whether an LLM can do those tasks, and then calculate a “potential exposure” score. The study by Eloundou et al. (OpenAI), first circulated in 2023 and later published in Science, is the gold standard here, measuring whether LLMs could speed up tasks by at least 50%.
But there is a massive chasm between “can do” and “is doing.” A pharmacist’s task of cross-referencing drug interactions is technically within a model’s capabilities, but legal hurdles, validation protocols, and software integration mean Claude isn’t actually replacing your local pharmacist yet.
Anthropic’s new report introduces a metric they call Observed Exposure. It combines three layers:
- O*NET Task Lists: What does the job actually entail?
- Theoretical Exposure: The Eloundou et al. scores (β).
- Real-world Usage: Actual telemetry data from Claude users.
They even applied a weighting system: “Augmentation” (human + AI) gets a half-weight, while “Full Automation” (API-driven, no human) gets full weight. The result? A sobering reality check. While 94% of tasks in computer and mathematical occupations are theoretically automatable, the observed coverage is only 33%.
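The weighting scheme described above can be sketched in a few lines. This is a hypothetical reconstruction of the idea, not Anthropic's actual pipeline: the task fields, weights applied per task, and toy data below are illustrative assumptions based only on the report's description (half-weight for augmentation, full weight for automation, and exposure requiring both theoretical capability and observed usage).

```python
# Illustrative sketch of a weighted "Observed Exposure" score.
# All field names and data are hypothetical, not Anthropic's actual schema.

AUGMENTATION_WEIGHT = 0.5  # human + AI collaboration counts half
AUTOMATION_WEIGHT = 1.0    # API-driven, no human in the loop, counts fully

def observed_exposure(tasks):
    """tasks: list of dicts with keys:
       'theoretically_exposed' (bool, Eloundou-style expert score),
       'augmented_usage' (bool, observed human+AI use),
       'automated_usage' (bool, observed API-driven use)."""
    if not tasks:
        return 0.0
    score = 0.0
    for t in tasks:
        if not t["theoretically_exposed"]:
            continue  # exposure requires capability AND observed use
        if t["automated_usage"]:
            score += AUTOMATION_WEIGHT
        elif t["augmented_usage"]:
            score += AUGMENTATION_WEIGHT
    return score / len(tasks)

# Toy occupation with four O*NET-style tasks:
tasks = [
    {"theoretically_exposed": True,  "augmented_usage": True,  "automated_usage": False},
    {"theoretically_exposed": True,  "augmented_usage": False, "automated_usage": True},
    {"theoretically_exposed": True,  "augmented_usage": False, "automated_usage": False},
    {"theoretically_exposed": False, "augmented_usage": False, "automated_usage": False},
]
print(observed_exposure(tasks))  # 0.375 = (0.5 + 1.0 + 0 + 0) / 4
```

Note how the third task drags the score down: it is theoretically automatable but shows no observed usage, which is exactly the 94%-vs-33% gap in miniature.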
We are currently seeing a roughly 3x gap between the hype of what AI could do and the reality of what it is actually doing in production environments.
The 74.5% Mirage
The figure that caught everyone’s eye was the “74.5% exposure” for computer programmers. But let’s look at the plumbing of that number. It doesn’t mean 74.5% of programmers are losing their jobs. It means that for 74.5% of the tasks within that job category, there is both a theoretical ability for AI to perform them and observed evidence of Claude users automating them.
There’s a clear selection bias here: the report only looks at Claude data. It misses ChatGPT, Gemini, and GitHub Copilot. If they included those, the “Observed Exposure” would likely be higher. Conversely, Claude’s user base is heavily skewed toward developers, which might inflate the numbers for that specific niche.
But the most important takeaway from the Anthropic team is this: They found no evidence of mass unemployment. Using a Difference-in-Differences (DID) analysis on U.S. Current Population Survey (CPS) data, they compared the top 25% AI-exposed jobs with the 0% exposed group. The change in the unemployment gap since the launch of ChatGPT? A statistically insignificant +0.20 percentage points.
If you’re looking for a smoking gun of mass layoffs, the data simply isn’t there.
However, there is a technical caveat. The researchers admitted that their analysis framework has a “minimum detectable effect size” of about 1 percentage point. This means that a 0.5% shift in unemployment—which would still represent hundreds of thousands of people—would be invisible to this specific statistical lens. We aren’t seeing a fire, but our smoke detector might just have a high threshold.
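The DID logic behind that null result is simple enough to show directly. The numbers below are made up for illustration (the report gives only the +0.20 point gap, not the underlying group means), but the arithmetic is the standard one: the treated group's change minus the control group's change. Anything smaller than the ~1 point minimum detectable effect would look just like this.

```python
# Minimal difference-in-differences sketch. The unemployment rates here
# are illustrative, NOT the actual CPS figures from the report.

u = {
    "high_exposure": {"pre": 3.0, "post": 3.5},  # top-25% AI-exposed occupations
    "no_exposure":   {"pre": 3.2, "post": 3.5},  # 0%-exposed comparison group
}

def did_estimate(u):
    """Treated group's change minus control group's change (percentage points)."""
    treated_change = u["high_exposure"]["post"] - u["high_exposure"]["pre"]
    control_change = u["no_exposure"]["post"] - u["no_exposure"]["pre"]
    return treated_change - control_change

print(round(did_estimate(u), 2))  # 0.2 with these toy numbers
```

The point of the construction is that the control group absorbs whatever hit both groups (interest rates, the tech winter), so only the differential change is attributed to exposure; the cost is that small differential effects fall below the design's detection threshold.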
Canaries in the Coal Mine: The Barrier to Entry
The real signal isn’t in who is leaving the workforce; it’s in who isn’t being allowed to enter. When we look at the “Job Start Rate” for workers aged 22–25, the picture turns grim.
In high-AI-exposure occupations, new entries for young workers began to drop sharply in 2024, a 14% decrease compared to 2022 levels. Meanwhile, for those over 25, the hiring rate remained stable. This aligns closely with a 2025 study by Erik Brynjolfsson and his team at the Stanford Digital Economy Lab, who analyzed ADP payroll data. They found that in AI-exposed roles, employment for 22–25-year-olds dropped between 6% and 16%, while it actually increased slightly for those over 35.
Why is this happening? It’s a classic GTM (Go-To-Market) strategy shift applied to labor. When a market changes, you don’t see “churn” (layoffs) first; you see a “hiring freeze” (lack of new acquisition).
For a company, firing a senior dev is expensive and risky. But if that senior dev can now use AI to do the work of two juniors, the company simply stops hiring the juniors. The friction is zero. The “Career Ladder” is losing its first rung.
The Structural Silence
We have to be careful about causality. Is this all AI? Or is it the result of high interest rates and the “tech winter” of 2023-2024? Both reports acknowledge this ambiguity. But the convergence of two different data sources—CPS surveys and ADP payrolls—suggests this isn’t just noise.
I find this “quiet restructuring” more dangerous than a sudden crash. A crash triggers a policy response. A quiet narrowing of entry-level opportunities just leads to a “lost generation” of talent that never gets the chance to become the seniors of tomorrow.
When AI labs like Anthropic publish these reports, we should listen, but we should also look at the framing. Proclaiming “no mass unemployment” is a strategic win for an AI company. It blunts the edge of regulation. But by pointing to the “entry-level gap” in their concluding sentences, they are signaling that they see the structural shift happening in the shadows.
What We Should Be Monitoring
We need to stop obsessing over the unemployment rate and start looking at “Labor Market Entry Pathways.” If you are a student or a junior professional, “studying harder” or “taking an AI course” is an individual fix for what is a structural problem.
The problem isn’t your lack of skill; it’s the changing math of the firm. If the “cost of entry-level training” is perceived as higher than “AI-augmented senior output,” the entry-level market will continue to hollow out.
The true “end of the junior” isn’t a headline; it’s a quiet, data-driven decision made in thousands of HR departments every day. We shouldn’t be asking “Will I be replaced?” but rather “How do we rebuild the ladder?”
Framing this as a simple “fight for survival” where the solution is to “work harder” or “buy an AI course” is not only reductive—it’s often predatory. It ignores the structural reality of how firms use productivity gains to offset entry-level overhead. The solution won’t come from individual anxiety, but from a fundamental rethink of how we mentor and integrate the next generation into an AI-augmented workflow.
References & Further Reading
- Massenkoff, M. & McCrory, P. (2026). Labor market impacts of AI: A new measure and early evidence. Anthropic. (Focus on Figure 7 regarding youth entry rates). https://www.anthropic.com/research/labor-market-impacts
- Brynjolfsson, E., et al. (2025). Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence. Stanford Digital Economy Lab. https://digitaleconomy.stanford.edu/publication/canaries-in-the-coal-mine-six-facts-about-the-recent-employment-effects-of-artificial-intelligence/
- Eloundou, T., et al. (2024). GPTs are GPTs: An early look at the labor market impact potential of large language models. Science. https://arxiv.org/abs/2303.10130
- Casilli, A. (2025). Young Workers Haven’t Been Replaced by AI — Economists Are Just Looking for Them in the Wrong Places. A critical look at the task-based model. https://www.casilli.fr/2025/08/29/young-workers-havent-been-replaced-by-ai-economists-are-just-looking-for-them-in-the-wrong-places/