Month: March 2026

The AI Reskilling Delusion

Former Commerce Secretary Gina Raimondo recently published an opinion piece in the New York Times arguing that America needs “a new grand bargain between the public and private sectors” to manage AI-driven workforce disruption. Her proposal: modular credentials, employer-led training programs, wage insurance and tax incentives to help workers transition between jobs. She envisions mid-career professionals cycling back to campus for short, affordable credentials that stack over time into degrees.

It’s a perfectly reasonable policy framework for a problem that no longer exists.

Raimondo — along with a growing chorus of policymakers, editorial boards and researchers — is pattern-matching to every previous wave of technological disruption. Automation eliminated farming jobs but created factory jobs. Factories automated, and the service economy emerged. The internet killed travel agents but created web developers. The pattern has held for two centuries. Surely it will hold again.

It won’t. And the reason is straightforward once you stop and actually think about what AGI means.

This Is Not Automation

Every previous technology automated specific tasks. A loom automated weaving. A spreadsheet automated arithmetic. Even sophisticated software automates defined processes within narrow domains. These technologies couldn’t jump lanes. They were powerful but fundamentally limited in scope, which meant there were always adjacent spaces where humans remained necessary — and new categories of work that the technology itself created.

AGI is not a task-specific tool. By definition, artificial general intelligence can perform any cognitive work that any human can perform. It reasons. It plans. It learns. It adapts. It does all of this across every domain simultaneously. The closest analogy isn’t a better loom — it’s a machine that knows everything, can plan and reason, works around the clock and can be copied a thousand times over by lunch.

That distinction breaks the historical pattern completely, and anyone proposing workforce policy needs to grapple with it honestly.

The Math That Nobody Wants to Do

Let’s walk through the actual characteristics of an AGI workforce, because this is where the reskilling narrative falls apart.

An AGI works around the clock. No vacations, no sick days, no commute. It doesn’t spend half of Monday catching up on email or need annual HR compliance training. It produces output continuously.

It knows effectively everything. Not one field, not two — all of them. A human professional might spend a decade building deep expertise in a specialty. An AGI has that depth in every specialty from the moment it’s deployed.

It scales instantly. Need another team member? Spin up another instance. It has the exact same knowledge and capabilities as every other instance, immediately. No recruiting, no interviewing, no onboarding, no ramp-up period. Need fifty more? Done in minutes. Need fifty fewer? Shut them down. No severance, no unemployment insurance, no lawsuit risk.

It learns new material at machine speed. Any new job category that emerges — including ones created by AI itself — can be learned by an AGI in the time it takes to process the relevant information. Seconds to minutes, not months to years.

It costs a fraction of a human employee. And unlike salaries, technology costs trend downward over time.

Now compare that to Raimondo’s mid-career accountant going back to school for a four-month credential. What exactly is she going to learn in four months that an AGI doesn’t already know or can’t acquire in thirty seconds? This isn’t a rhetorical question. It’s the question that the entire reskilling framework cannot answer.

There Are No New Jobs

The standard rebuttal is that AI will create entirely new categories of work, just as every previous technology has. And in the short term, that’s true — we’re already seeing demand for AI engineers, prompt specialists and agent orchestrators. But these roles exist specifically because current AI systems are limited and require human guidance.

The moment AI reaches general capability, those roles evaporate along with everything else. An AGI can build and orchestrate AI agents better than any human can. It can engineer prompts better. It can architect AI systems better. The transitional jobs that AI creates are themselves automatable by the very technology that created them.

This is the part of the logic chain that the optimistic frameworks refuse to engage with. They acknowledge that AI can automate existing work, propose that new work will emerge to replace it and then simply stop thinking. They never ask the obvious follow-up: can AI also do the new work? If you’re talking about AGI, the answer is yes. That’s what the “general” in “general intelligence” means.

Physical Work Is Not a Safe Harbor Either

There’s an implicit assumption in these discussions that physical jobs will remain human territory. That assumption has an expiration date. Humanoid robotics has been progressing for years — the hardware has been ahead of the software for a long time. Now that AI is providing the cognitive layer these platforms have always lacked, progress is accelerating rapidly. Physical labor jobs will likely trail knowledge work displacement by only a few years.

The handful of roles that may resist full automation are the deeply personal, high-touch ones — hairstylists, massage therapists, certain nursing roles. Jobs where human connection is the actual product. These will persist, but they cannot absorb billions of displaced workers. They’re a rounding error against the scale of the problem.

There Is One Version of This That Works

I want to be clear about something. I’m not anti-AI. I work with this technology every day and I find it genuinely remarkable. And there is actually a world in which Raimondo’s proposals make perfect sense.

That world is one where we keep AI narrow.

Narrow AI — even superhuman narrow AI — is an amplifier, not a replacement. AlphaFold revolutionized protein structure prediction. AlphaZero mastered chess and Go beyond any human. Google’s GNoME is accelerating materials science discovery. These systems are extraordinarily capable but they can’t generalize. They can’t decide to go do something else. They make human researchers and professionals dramatically more productive without threatening to replace them wholesale. In that world, modular credentials and employer-led training programs and wage insurance are sensible responses. Workers reskill to leverage powerful but bounded tools. The historical pattern of displacement and adaptation continues to hold.

Even some degree of general AI is manageable. Current large language models are useful across many domains but they still require significant human oversight and judgment. They augment more than they replace. A workforce policy built around helping people use these tools effectively would be entirely reasonable.

The problem is that nobody in a position of influence is proposing we stay here.

The frontier labs are explicitly racing toward AGI — artificial intelligence that matches or exceeds human capability across all cognitive domains and can act autonomously. That’s not a fringe interpretation. It’s their stated mission. And progress has not stalled. If anything, the pace of capability improvement is accelerating. Major leaps in model capability arrived just in the last few months, and the intervals between breakthroughs are compressing, not expanding.

There is no wall. There is no plateau. And there is no serious policy effort anywhere in the world to draw a line between “AI that amplifies human workers” and “AI that replaces them.”

That’s what makes the reskilling narrative so maddening. It’s not wrong in principle — it’s wrong about which reality it’s addressing. Raimondo is writing detailed policy for a future that requires a precondition nobody is working to establish. It’s like selling umbrellas to people living downstream of a dam that’s visibly buckling under record floodwaters.

Raimondo writes, “I refuse to accept that an unemployment crisis is inevitable.” I’d actually agree with that — it isn’t inevitable. But avoiding it requires honestly confronting the trajectory we’re on and making deliberate choices about what AI we build and what we don’t. What it does not require is another workforce training program.

The reskilling narrative isn’t a plan. It’s a security blanket. And it’s worth asking who benefits from the rest of us holding onto it.

The trajectory of AI isn’t being shaped by workforce policy committees or university credential programs. It’s being shaped by a handful of people with very specific goals and very deep pockets. That’s a conversation worth having — and something I’ll be covering in the future.

Back From the Wilderness

It has been over six years since my last post. Life happened. Covid happened. Work happened. I got heads-down in a challenging role and let the blog go dark. That's on me, and I'm here to course-correct.

What I’ve Been Doing

When I last posted in 2019, I had just started a new job doing Clojure development on Mac — a pretty significant departure for someone with roughly 14 years of Delphi followed by 16 years of C# and .NET on Windows. That role has lasted considerably longer than I originally expected. I’ve spent the intervening years building complex systems in Clojure on the JVM, working with cloud infrastructure and generally living deep in a technology stack I would not have predicted for myself.

Working on Mac full-time has been its own education. Under the hood it’s built on UNIX and that part is genuinely great — I routinely have several terminal windows open with multiple tabs in each and the low-level, non-GUI development experience is solid. The GUI side is another story. Finder is an exercise in frustration, window drag handles are elusive and Apple’s philosophical commitment to “there is one correct way to do everything and you will conform” runs directly counter to how I think software should work. I believe software should adapt to the user and present multiple ways of accomplishing the same task — menus, toolbars, keyboard shortcuts — and be customizable. Apple believes users should adapt to the software. We disagree. But that’s maybe a topic for its own post.

Where My Head Is At

AI is, by any reasonable measure, the most important shift in software development in decades. Possibly ever. My interest in it isn’t new — I’ve been thinking seriously about the trajectory of artificial intelligence since around 2000 and I’ve been anticipating something like what we’re seeing now for a long time. What’s changed in the last couple of years is that the tools have matured to the point where they’re practically useful and the rate of progress has become impossible to ignore even for skeptics.

My employer has made it very clear that understanding and leveraging AI isn’t optional — it’s expected. And I agree with them on that. So I’ve been investing significant time into going deep: agents, local LLM inference, security and hardening, infrastructure and the practical realities of building real systems around these technologies. This is where the industry is going and I plan to be ahead of it rather than chasing it.

What I’ll Be Writing About

This blog is going to be a place where I share what I'm learning, what I'm building and what I think about all of it. Honestly. The tone will be direct. I'm not going to sugarcoat things, I'm not going to toe any corporate line and I'm not going to pretend to agree with "best practices" that are performative rather than practical. Where the facts don't line up with the popular consensus, I'm going to go with the facts.

Expect posts on AI — agents, tooling, local inference, security and what it’s actually like to stand up the infrastructure to support all of this. The technology stack will lean heavily on C# with .NET and TypeScript where the choice is mine to make, because they’re mature, productive, broadly capable platforms. But I also have years of Clojure experience and some rather pointed opinions that have been building up. Those will make an appearance.

I’ll also be writing about infrastructure, networking, self-hosting and hardware — because the local AI space is evolving fast and the decisions around it matter more than most people realize.

The Short Version

I’m back. I have things to say. Some of it will be useful, some of it will be opinionated and I’ll do my best to make sure all of it is honest.

More to come.

Site and all contents Copyright © 2019 James B. Higgins. All Rights Reserved.