Former Commerce Secretary Gina Raimondo recently published an opinion piece in the New York Times arguing that America needs “a new grand bargain between the public and private sectors” to manage AI-driven workforce disruption. Her proposal: modular credentials, employer-led training programs, wage insurance and tax incentives to help workers transition between jobs. She envisions mid-career professionals cycling back to campus for short, affordable credentials that stack over time into degrees.
It’s a perfectly reasonable policy framework for a problem that no longer exists.
Raimondo — along with a growing chorus of policymakers, editorial boards and researchers — is pattern-matching to every previous wave of technological disruption. Automation eliminated farming jobs but created factory jobs. Factories automated, and the service economy emerged. The internet killed travel agents but created web developers. The pattern has held for two centuries. Surely it will hold again.
It won’t. And the reason is straightforward once you stop and actually think about what AGI means.
This Is Not Automation
Every previous technology automated specific tasks. A loom automated weaving. A spreadsheet automated arithmetic. Even sophisticated software automates defined processes within narrow domains. These technologies couldn’t jump lanes. They were powerful but fundamentally limited in scope, which meant there were always adjacent spaces where humans remained necessary — and new categories of work that the technology itself created.
AGI is not a task-specific tool. By definition, artificial general intelligence can perform any cognitive work that any human can perform. It reasons. It plans. It learns. It adapts. It does all of this across every domain simultaneously. The closest analogy isn’t a better loom — it’s a machine that knows everything, can plan and reason, works around the clock and can be copied a thousand times over by lunch.
That distinction breaks the historical pattern completely, and anyone proposing workforce policy needs to grapple with it honestly.
The Math That Nobody Wants to Do
Let’s walk through the actual characteristics of an AGI workforce, because this is where the reskilling narrative falls apart.
An AGI works around the clock. No vacations, no sick days, no commute. It doesn’t spend half of Monday catching up on email or need annual HR compliance training. It produces output continuously.
It knows effectively everything. Not one field, not two — all of them. A human professional might spend a decade building deep expertise in a specialty. An AGI has that depth in every specialty from the moment it’s deployed.
It scales instantly. Need another team member? Spin up another instance. It has the exact same knowledge and capabilities as every other instance, immediately. No recruiting, no interviewing, no onboarding, no ramp-up period. Need fifty more? Done in minutes. Need fifty fewer? Shut them down. No severance, no unemployment insurance, no lawsuit risk.
It learns new material at machine speed. Any new job category that emerges — including ones created by AI itself — can be learned by an AGI in the time it takes to process the relevant information. Seconds to minutes, not months to years.
It costs a fraction of a human employee. And unlike salaries, technology costs trend downward over time.
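To make the arithmetic concrete, here is a back-of-envelope comparison in Python. Every number in it is an illustrative assumption I'm supplying for the sketch — the salary, the compute cost, and the hours are placeholders, not data from any study — and the point is only how lopsided the ratio is before you even account for speed or breadth of knowledge:

```python
# Back-of-envelope cost-per-hour comparison: one human knowledge worker
# vs. one AGI instance. All figures are illustrative assumptions.

HOURS_PER_YEAR = 24 * 365          # AGI runs continuously: 8,760 hours

human_hours = 40 * 48              # ~40 hrs/week, 48 working weeks: 1,920
human_cost = 120_000               # assumed fully loaded salary (USD/year)

agi_hours = HOURS_PER_YEAR         # no vacations, sick days, or ramp-up
agi_cost = 12_000                  # assumed annual compute cost (USD/year)

# Cost per productive hour, ignoring any speed or quality difference
human_rate = human_cost / human_hours   # 62.50 USD/hour
agi_rate = agi_cost / agi_hours         # ~1.37 USD/hour

advantage = human_rate / agi_rate
print(f"Human: ${human_rate:.2f}/hr  AGI: ${agi_rate:.2f}/hr")
print(f"Raw cost advantage: ~{advantage:.0f}x, before any speed multiplier")
```

Swap in whatever numbers you find plausible; the ratio stays brutal, because the human side is capped at roughly 2,000 hours a year and the machine side isn't capped at all.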
Now compare that to Raimondo’s mid-career accountant going back to school for a four-month credential. What exactly is she going to learn in four months that an AGI doesn’t already know or can’t acquire in thirty seconds? This isn’t a rhetorical question. It’s the question that the entire reskilling framework cannot answer.
There Are No New Jobs
The standard rebuttal is that AI will create entirely new categories of work, just as every previous technology has. And in the short term, that’s true — we’re already seeing demand for AI engineers, prompt specialists and agent orchestrators. But these roles exist specifically because current AI systems are limited and require human guidance.
The moment AI reaches general capability, those roles evaporate along with everything else. An AGI can build and orchestrate AI agents better than any human can. It can engineer prompts better. It can architect AI systems better. The transitional jobs that AI creates are themselves automatable by the very technology that created them.
This is the part of the logic chain that the optimistic frameworks refuse to engage with. They acknowledge that AI can automate existing work, propose that new work will emerge to replace it, and then simply stop thinking. They never ask the obvious follow-up: can AI also do the new work? If you're talking about AGI, the answer is yes. That's what the "general" in "general intelligence" means.
Physical Work Is Not a Safe Harbor Either
There’s an implicit assumption in these discussions that physical jobs will remain human territory. That assumption has an expiration date. Humanoid robotics has been progressing for years — the hardware has been ahead of the software for a long time. Now that AI is providing the cognitive layer these platforms have always lacked, progress is accelerating rapidly. Physical labor jobs will likely trail knowledge work displacement by only a few years.
The handful of roles that may resist full automation are the deeply personal, high-touch ones — hairstylists, massage therapists, certain nursing roles. Jobs where human connection is the actual product. These will persist, but they cannot absorb billions of displaced workers. They’re a rounding error against the scale of the problem.
There Is One Version of This That Works
I want to be clear about something. I’m not anti-AI. I work with this technology every day and I find it genuinely remarkable. And there is actually a world in which Raimondo’s proposals make perfect sense.
That world is one where we keep AI narrow.
Narrow AI — even superhuman narrow AI — is an amplifier, not a replacement. AlphaFold revolutionized protein structure prediction. AlphaZero mastered chess and Go beyond any human. Google's GNoME is accelerating materials science discovery. These systems are extraordinarily capable, but they can't generalize. They can't decide to go do something else. They make human researchers and professionals dramatically more productive without threatening to replace them wholesale.

In that world, modular credentials, employer-led training programs and wage insurance are sensible responses. Workers reskill to leverage powerful but bounded tools. The historical pattern of displacement and adaptation continues to hold.
Even some degree of general AI is manageable. Current large language models are useful across many domains but they still require significant human oversight and judgment. They augment more than they replace. A workforce policy built around helping people use these tools effectively would be entirely reasonable.
The problem is that nobody in a position of influence is proposing we stay here.
The frontier labs are explicitly racing toward AGI — artificial intelligence that matches or exceeds human capability across all cognitive domains and can act autonomously. That’s not a fringe interpretation. It’s their stated mission. And progress has not stalled. If anything, the pace of capability improvement is accelerating. Major leaps in model capability arrived just in the last few months, and the intervals between breakthroughs are compressing, not expanding.
There is no wall. There is no plateau. And there is no serious policy effort anywhere in the world to draw a line between “AI that amplifies human workers” and “AI that replaces them.”
That’s what makes the reskilling narrative so maddening. It’s not wrong in principle — it’s wrong about which reality it’s addressing. Raimondo is writing detailed policy for a future that requires a precondition nobody is working to establish. It’s like selling umbrellas to people living downstream of a dam that’s visibly buckling under record floodwaters.
Raimondo writes, “I refuse to accept that an unemployment crisis is inevitable.” I’d actually agree with that — it isn’t inevitable. But avoiding it requires honestly confronting the trajectory we’re on and making deliberate choices about what AI we build and what we don’t. What it does not require is another workforce training program.
The reskilling narrative isn’t a plan. It’s a security blanket. And it’s worth asking who benefits from the rest of us holding onto it.
The trajectory of AI isn’t being shaped by workforce policy committees or university credential programs. It’s being shaped by a handful of people with very specific goals and very deep pockets. That’s a conversation worth having — and something I’ll be covering in the future.