
The Original Coder Blog Posts

Featured Post

Welcome to my blog!

I’ll mostly be posting about software architecture & development since that’s what I know best. There will also be some posts about IT, the cloud, the Internet and similar stuff. Occasionally I may toss in a piece of trivia or a quote. If you have thoughts, ideas or questions please leave a comment!


The AI Reskilling Delusion

Former Commerce Secretary Gina Raimondo recently published an opinion piece in the New York Times arguing that America needs “a new grand bargain between the public and private sectors” to manage AI-driven workforce disruption. Her proposal: modular credentials, employer-led training programs, wage insurance and tax incentives to help workers transition between jobs. She envisions mid-career professionals cycling back to campus for short, affordable credentials that stack over time into degrees.

It’s a perfectly reasonable policy framework for a problem that no longer exists.

Raimondo — along with a growing chorus of policymakers, editorial boards and researchers — is pattern-matching to every previous wave of technological disruption. Automation eliminated farming jobs, but created factory jobs. Factories automated, but the service economy emerged. The internet killed travel agents but created web developers. The pattern has held for two centuries. Surely it will hold again.

It won’t. And the reason is straightforward once you stop and actually think about what AGI means.

This Is Not Automation

Every previous technology automated specific tasks. A loom automated weaving. A spreadsheet automated arithmetic. Even sophisticated software automates defined processes within narrow domains. These technologies couldn’t jump lanes. They were powerful but fundamentally limited in scope, which meant there were always adjacent spaces where humans remained necessary — and new categories of work that the technology itself created.

AGI is not a task-specific tool. By definition, artificial general intelligence can perform any cognitive work that any human can perform. It reasons. It plans. It learns. It adapts. It does all of this across every domain simultaneously. The closest analogy isn’t a better loom — it’s a machine that knows everything, can plan and reason, works around the clock and can be copied a thousand times over by lunch.

That distinction breaks the historical pattern completely, and anyone proposing workforce policy needs to grapple with it honestly.

The Math That Nobody Wants to Do

Let’s walk through the actual characteristics of an AGI workforce, because this is where the reskilling narrative falls apart.

An AGI works around the clock. No vacations, no sick days, no commute. It doesn’t spend half of Monday catching up on email or need annual HR compliance training. It produces output continuously.

It knows effectively everything. Not one field, not two — all of them. A human professional might spend a decade building deep expertise in a specialty. An AGI has that depth in every specialty from the moment it’s deployed.

It scales instantly. Need another team member? Spin up another instance. It has the exact same knowledge and capabilities as every other instance, immediately. No recruiting, no interviewing, no onboarding, no ramp-up period. Need fifty more? Done in minutes. Need fifty fewer? Shut them down. No severance, no unemployment insurance, no lawsuit risk.

It learns new material at machine speed. Any new job category that emerges — including ones created by AI itself — can be learned by an AGI in the time it takes to process the relevant information. Seconds to minutes, not months to years.

It costs a fraction of a human employee. And unlike salaries, technology costs trend downward over time.

Now compare that to Raimondo’s mid-career accountant going back to school for a four-month credential. What exactly is she going to learn in four months that an AGI doesn’t already know or can’t acquire in thirty seconds? This isn’t a rhetorical question. It’s the question that the entire reskilling framework cannot answer.

There Are No New Jobs

The standard rebuttal is that AI will create entirely new categories of work, just as every previous technology has. And in the short term, that’s true — we’re already seeing demand for AI engineers, prompt specialists and agent orchestrators. But these roles exist specifically because current AI systems are limited and require human guidance.

The moment AI reaches general capability, those roles evaporate along with everything else. An AGI can build and orchestrate AI agents better than any human can. It can engineer prompts better. It can architect AI systems better. The transitional jobs that AI creates are themselves automatable by the very technology that created them.

This is the part of the logic chain that the optimistic frameworks refuse to engage with. They acknowledge that AI can automate existing work, propose that new work will emerge to replace it and then simply stop thinking. They never ask the obvious follow-up: can AI also do the new work? If you’re talking about AGI, the answer is yes. That’s what the “general” in “general intelligence” means.

Physical Work Is Not a Safe Harbor Either

There’s an implicit assumption in these discussions that physical jobs will remain human territory. That assumption has an expiration date. Humanoid robotics has been progressing for years — the hardware has been ahead of the software for a long time. Now that AI is providing the cognitive layer these platforms have always lacked, progress is accelerating rapidly. Physical labor jobs will likely trail knowledge work displacement by only a few years.

The handful of roles that may resist full automation are the deeply personal, high-touch ones — hairstylists, massage therapists, certain nursing roles. Jobs where human connection is the actual product. These will persist, but they cannot absorb billions of displaced workers. They’re a rounding error against the scale of the problem.

There Is One Version of This That Works

I want to be clear about something. I’m not anti-AI. I work with this technology every day and I find it genuinely remarkable. And there is actually a world in which Raimondo’s proposals make perfect sense.

That world is one where we keep AI narrow.

Narrow AI — even superhuman narrow AI — is an amplifier, not a replacement. AlphaFold revolutionized protein structure prediction. AlphaZero mastered chess and Go beyond any human. Google’s GNoME is accelerating materials science discovery. These systems are extraordinarily capable but they can’t generalize. They can’t decide to go do something else. They make human researchers and professionals dramatically more productive without threatening to replace them wholesale. In that world, modular credentials and employer-led training programs and wage insurance are sensible responses. Workers reskill to leverage powerful but bounded tools. The historical pattern of displacement and adaptation continues to hold.

Even some degree of general AI is manageable. Current large language models are useful across many domains but they still require significant human oversight and judgment. They augment more than they replace. A workforce policy built around helping people use these tools effectively would be entirely reasonable.

The problem is that nobody in a position of influence is proposing we stay here.

The frontier labs are explicitly racing toward AGI — artificial intelligence that matches or exceeds human capability across all cognitive domains and can act autonomously. That’s not a fringe interpretation. It’s their stated mission. And progress has not stalled. If anything, the pace of capability improvement is accelerating. Major leaps in model capability arrived just in the last few months, and the intervals between breakthroughs are compressing, not expanding.

There is no wall. There is no plateau. And there is no serious policy effort anywhere in the world to draw a line between “AI that amplifies human workers” and “AI that replaces them.”

That’s what makes the reskilling narrative so maddening. It’s not wrong in principle — it’s wrong about which reality it’s addressing. Raimondo is writing detailed policy for a future that requires a precondition nobody is working to establish. It’s like selling umbrellas to people living downstream of a dam that’s visibly buckling under record floodwaters.

Raimondo writes, “I refuse to accept that an unemployment crisis is inevitable.” I’d actually agree with that — it isn’t inevitable. But avoiding it requires honestly confronting the trajectory we’re on and making deliberate choices about what AI we build and what we don’t. What it does not require is another workforce training program.

The reskilling narrative isn’t a plan. It’s a security blanket. And it’s worth asking who benefits from the rest of us holding onto it.

The trajectory of AI isn’t being shaped by workforce policy committees or university credential programs. It’s being shaped by a handful of people with very specific goals and very deep pockets. That’s a conversation worth having — and something I’ll be covering in the future.


Back From the Wilderness

It has been over six years since my last post. Life happened. Covid happened. Work happened. I got heads-down on a challenging role and let the blog go dark. That’s on me and I’m here to course correct.

What I’ve Been Doing

When I last posted in 2019, I had just started a new job doing Clojure development on Mac — a pretty significant departure for someone with roughly 14 years of Delphi followed by 16 years of C# and .NET on Windows. That role has lasted considerably longer than I originally expected. I’ve spent the intervening years building complex systems in Clojure on the JVM, working with cloud infrastructure and generally living deep in a technology stack I would not have predicted for myself.

Working on Mac full-time has been its own education. Under the hood it’s built on UNIX and that part is genuinely great — I routinely have several terminal windows open with multiple tabs in each and the low-level, non-GUI development experience is solid. The GUI side is another story. Finder is an exercise in frustration, window drag handles are elusive and Apple’s philosophical commitment to “there is one correct way to do everything and you will conform” runs directly counter to how I think software should work. I believe software should adapt to the user and present multiple ways of accomplishing the same task — menus, toolbars, keyboard shortcuts — and be customizable. Apple believes users should adapt to the software. We disagree. But that’s maybe a topic for its own post.

Where My Head Is At

AI is, by any reasonable measure, the most important shift in software development in decades. Possibly ever. My interest in it isn’t new — I’ve been thinking seriously about the trajectory of artificial intelligence since around 2000 and I’ve been anticipating something like what we’re seeing now for a long time. What’s changed in the last couple of years is that the tools have matured to the point where they’re practically useful and the rate of progress has become impossible to ignore even for skeptics.

My employer has made it very clear that understanding and leveraging AI isn’t optional — it’s expected. And I agree with them on that. So I’ve been investing significant time into going deep: agents, local LLM inference, security and hardening, infrastructure and the practical realities of building real systems around these technologies. This is where the industry is going and I plan to be ahead of it rather than chasing it.

What I’ll Be Writing About

This blog is going to be a place where I share what I'm learning, what I'm building and what I think about all of it. Honestly. The tone will be direct. I'm not going to sugarcoat things, I'm not going to toe any corporate lines and I'm not going to pretend to agree with "best practices" that are performative rather than practical. Where the facts don't line up with the popular consensus, I'm going to go with the facts.

Expect posts on AI — agents, tooling, local inference, security and what it’s actually like to stand up the infrastructure to support all of this. The technology stack will lean heavily on C# with .NET and TypeScript where the choice is mine to make, because they’re mature, productive, broadly capable platforms. But I also have years of Clojure experience and some rather pointed opinions that have been building up. Those will make an appearance.

I’ll also be writing about infrastructure, networking, self-hosting and hardware — because the local AI space is evolving fast and the decisions around it matter more than most people realize.

The Short Version

I’m back. I have things to say. Some of it will be useful, some of it will be opinionated and I’ll do my best to make sure all of it is honest.

More to come.


Status: .NET Layers & Clojure

Haven’t posted recently because I started a new job doing Clojure development on Mac targeting the JVM on Linux hosted in AWS. In other words, mostly stuff I haven’t done before, so I’ve been spending a lot of time getting up to speed on all of that (which is still a work in progress).

I am still actively working on my open source Layers library for .NET and have made considerable progress. Since it’s still alpha it’s been going through a lot of change and refactoring as I work to get it to where I want it for moving forward. Most of the core layers capabilities are implemented (though not tested). Next I need to write a top layer implementation (such as .NET Core MVC WebAPI) and a bottom layer (such as Entity Framework) so that I can feed incoming requests into the layers stack and have them perform actual work. Once I have those I can start testing and working with it from an application perspective and go from there.

Due to the new Clojure work and the time that is taking, it will likely take me longer than I had planned to get a working version of Layers released, but it will definitely happen. Plus I’ve also created some simple, very high-level “Common” open source libraries for .NET as part of this that I think will be highly reusable for a lot of projects.


PGP & KeyBase.io

High quality security is a good thing. Many years ago (early 90s) Phil Zimmermann released this thing called Pretty Good Privacy (PGP) that used public & private keys to enable people to sign & encrypt data. It’s a really good core design with very high security. When it came out I wrote an early Windows GUI to try and make it easier for most users and exchanged a couple of emails with Mr. Zimmermann. But unfortunately PGP never caught on because it was complicated to use and required building a network of trusted keys (so that you know who a key really belongs to).
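
The sign-and-verify idea at the heart of PGP can be sketched with Node’s built-in crypto module. This is a raw RSA demonstration of the underlying mechanism, not PGP’s actual packet format or web of trust:

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Generate a public/private key pair. The private key stays secret;
// the public key is what you'd distribute (or publish via Keybase).
const { publicKey, privateKey } = generateKeyPairSync("rsa", {
  modulusLength: 2048,
});

const message = Buffer.from("Pretty Good Privacy");

// Sign with the private key; anyone holding the public key can verify
// that the message really came from the key's owner, unmodified.
const signature = sign("sha256", message, privateKey);
const ok = verify("sha256", message, publicKey, signature);

// Tampering with the message invalidates the signature.
const tampered = Buffer.from("Pretty Bad Privacy");
const bad = verify("sha256", tampered, publicKey, signature);

console.log(ok, bad); // true false
```

Encryption works with the same key pair in the other direction: anyone can encrypt to your public key, but only your private key can decrypt it.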

I just recently found out about Keybase.io which is a system that provides secure chat & file sharing and uses PGP at its core. It also acts as a system for distributing keys and providing the identity of those keys. I’ve only played around on it for a little bit but it seems like this might finally be a good, viable way to enable widespread use of PGP!


DI is not IoC

Dependency Injection (DI) helps to enable Inversion of Control (IoC), but DI itself is not IoC, because truly moving the locus of control outward requires an architecture that wraps and invokes the DI. An example of this is ASP.NET MVC, which uses DI to instantiate the controllers and other objects needed to process web requests. That is IoC, but the libraries that implement DI (“containers”) are not themselves IoC containers; they are DI containers. Calling them IoC containers is inaccurate.
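
To make the distinction concrete, here’s a minimal sketch (hypothetical names, TypeScript for brevity). The container only constructs registered objects on request; the inversion of control happens in the framework layer that decides when to resolve and invoke them:

```typescript
// A minimal DI container: all it knows is how to build registered types.
class Container {
  private factories = new Map<string, () => unknown>();
  register<T>(name: string, factory: () => T): void {
    this.factories.set(name, factory);
  }
  resolve<T>(name: string): T {
    const factory = this.factories.get(name);
    if (!factory) throw new Error(`No registration for ${name}`);
    return factory() as T;
  }
}

// Application code: a controller with an injected dependency.
class GreetingService {
  greet(who: string): string { return `Hello, ${who}!`; }
}
class HomeController {
  constructor(private service: GreetingService) {}
  handle(): string { return this.service.greet("world"); }
}

// The "framework" layer is where IoC actually happens: it decides when
// to resolve the controller and invoke it, not the application code.
function frameworkDispatch(container: Container, route: string): string {
  const controller = container.resolve<HomeController>(route);
  return controller.handle();
}

const container = new Container();
container.register("home", () => new HomeController(new GreetingService()));
console.log(frameworkDispatch(container, "home")); // "Hello, world!"
```

Strip away `frameworkDispatch` and the container is just a factory with registration: useful DI, but nothing about control has been inverted.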


CyberPunk 2077

I am not a fanboy. I don’t get excited about upcoming games. But OMG I’ve been following CyberPunk 2077 for a while and it now has a release date! April 2020! For the first time ever I think I’m going to have to pre-order a Collector’s Edition game!

I’ll explain. Way, way, way back in college (in the 1990s) I got introduced to the CyberPunk RPG and I loved it. Then the CyberPunk 2020 edition came out and it was even better. I played RPGs when I was young but nothing ever captivated me as much as CyberPunk. I even loved the novels set in the genre (it is still, to this day, my favorite genre for books & movies).

I don’t play games all that often but The Witcher III from CD Projekt Red was one of the best I ever played. Then I found out that their next title would be CyberPunk 2077. They are basing it on the CyberPunk 2020 RPG and they have the author of that game on staff helping them develop it! I have never in my life been this interested in a video game.

And then I was watching the latest trailer / video of the game and it was just as awesome as the ones I had seen before. Right up to the point that Keanu Reeves shows up in the video right at the end as a character! Have you ever seen Johnny Mnemonic?

So this is what it feels like to be a fanboy. For this, I’m okay with it.

https://www.cyberpunk.net/us/en/

CyberPunk 2077 @ Steam


The Service Locator Pattern

Developers keep referring to Service Locator as an anti-pattern. If that is the case then ASP.NET MVC and every IoC container I’ve ever seen must be wrong because they use it.

The interface for accessing an IoC container is an implementation of the Service Locator pattern. You’re asking for some particular interface (aka a service) and it’s giving you back an instance (if it can).

Under the hood ASP.NET MVC uses a service locator (which almost always happens to be an IoC container) to new-up Controllers for handling incoming HTTP requests.

Service Locators can certainly be used incorrectly or where they should not, but they are not an anti-pattern. They are a specific tool in what should be an immense toolbox for solving certain types of problems. Sometimes they are the best choice. Sometimes they are a terrible choice. But the pattern itself is not at fault.
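
A minimal sketch of the pattern (hypothetical names, TypeScript for brevity): the consumer asks the locator for an interface and gets back an instance, exactly what a framework does when it news-up controllers:

```typescript
// A minimal service locator: callers ask it for a service by key.
class ServiceLocator {
  private services = new Map<string, unknown>();
  registerInstance<T>(key: string, instance: T): void {
    this.services.set(key, instance);
  }
  get<T>(key: string): T {
    const svc = this.services.get(key);
    if (svc === undefined) throw new Error(`Service not found: ${key}`);
    return svc as T;
  }
}

interface Logger { log(msg: string): string; }

const locator = new ServiceLocator();
locator.registerInstance<Logger>("logger", {
  log: (msg) => `[log] ${msg}`,
});

// Consumer code asks the locator for the interface it needs -- the
// pattern itself, used appropriately at a composition boundary.
const logger = locator.get<Logger>("logger");
console.log(logger.log("request received")); // "[log] request received"
```

The legitimate criticism is about *where* this is used: scattered `locator.get` calls deep inside business logic hide dependencies. At a framework’s composition boundary, it’s exactly the right tool.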

For more, read Service Locator vs Dependency-Injection, which goes into more detail and is also a very fun read. I’d mention the author’s name, but I can’t seem to find a name associated with the blog.


Original Coder Libraries w/Layers Architecture

I’ve just pushed a new version of the Original Coder Libraries up to GitHub that includes the first draft of the Layers library and architecture.

The libraries are hosted on GitHub: The Original Coder Libraries

This push includes the first version of the Layers architectural library I’ve been working on. It is based on similar architectures I’ve used on a few different projects in the past, which proved to be very helpful. From a features and maturity standpoint this could probably be considered the 3rd incarnation (once it’s completed; it’s still in alpha).

The library makes it incredibly easy and efficient to build software systems using layers. Especially systems that deal with data needing CRUD (Create, Read, Update and Delete) operations. Using the library it will be possible to implement a full set of CRUD endpoints for a resource (an entity, database table or the like) in about 100 lines of code.
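
The Layers API itself isn’t shown here, but the general idea of a stacked CRUD architecture can be sketched like this (hypothetical names, TypeScript rather than C# for brevity):

```typescript
// Every layer exposes the same CRUD contract, so layers stack freely.
interface CrudLayer<T extends { id: number }> {
  create(item: Omit<T, "id">): T;
  read(id: number): T | undefined;
}

// Bottom layer: in-memory storage standing in for something like
// an Entity Framework data layer.
class InMemoryLayer<T extends { id: number }> implements CrudLayer<T> {
  private rows = new Map<number, T>();
  private nextId = 1;
  create(item: Omit<T, "id">): T {
    const row = { ...item, id: this.nextId++ } as T;
    this.rows.set(row.id, row);
    return row;
  }
  read(id: number): T | undefined { return this.rows.get(id); }
}

// Top layer: stands in for a WebAPI controller, delegating downward.
// Middle layers (validation, auth, caching) would slot in between,
// each wrapping the next and sharing the same contract.
class ApiLayer<T extends { id: number }> {
  constructor(private next: CrudLayer<T>) {}
  post(item: Omit<T, "id">): T { return this.next.create(item); }
  get(id: number): T | undefined { return this.next.read(id); }
}

interface Widget { id: number; name: string; }
const api = new ApiLayer<Widget>(new InMemoryLayer<Widget>());
const w = api.post({ name: "gear" });
console.log(api.get(w.id)?.name); // "gear"
```

Because each resource only has to declare its entity type and pick its layers, the per-resource code stays tiny; that’s where the roughly-100-lines figure comes from.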

I’ve included a project named LayerApiMockup that provides an example of what setting up and implementing will be like with the library. It still needs a bit of work and I need to add the add-on libraries for implementing specific technologies (Entity Framework, ASP.NET MVC, etc) but this is a good start.


The funniest IT story

I literally almost fell out of my chair laughing during the first part of this.

The case of the 500-mile email

A long time ago I worked as a UNIX, Linux and network administrator. I’ve had to figure out some very odd networking issues and have also rewritten a sendmail.cf file from scratch. So maybe that’s partially why I found this story to be so very amusing.

Credit where credit is due, I came across this on Reddit.


Senior Developers & S.O.L.I.D. Principles

The S.O.L.I.D. principles didn’t exist when I was learning to program and by the time I had heard of them I had been working at a senior level for years. Once I heard about them I had a quick look, thought they were all pretty obvious and paid no more attention.

Let me clarify that. The principles that make up S.O.L.I.D. are pretty basic stuff. Junior developers will get them wrong all the time. Mid-level developers will mostly get them right but will still make mistakes. By the time a developer starts to transition into a senior role they should be applying these sorts of basic principles correctly and mostly automatically. A developer who has been working at the senior level for a few years should never even need to think about such things consciously.

Just like people don’t consciously think about how to walk or run, how to use punctuation when writing or how to park their car after they have been doing any of those for a few years. Experienced authors and drivers don’t think about the basics of those tasks, which is why people can commute from home to work and not even remember the drive. For this same reason the S.O.L.I.D. principles should be applied at a subconscious level by senior developers.

Which just recently has become a pain for me. Now that S.O.L.I.D. is all the rage, every interviewer seems compelled to drill developers on all of the details. Try to remember all of the low-level spelling and grammar rules you use when writing. If you’ve been out of school for at least a few years I bet you can’t remember most, if any, of them. You don’t need to anymore, just like I haven’t needed to think at such a basic level for years when programming. Those basic principles are automatic, and forcing them back into the conscious level isn’t a benefit and doesn’t improve anything.

Interviews just a few years ago weren’t asking such basic questions at the senior level. These types of basic interviewing questions are not going to land good senior developers. Someone who is book smart or spent time cramming before the interview could answer them even if they weren’t a developer. Meanwhile, the more in-depth questions that can’t be crammed for and require real senior-level experience, not just book learning, aren’t getting asked as often (or at all) during senior level interviews now.

Site and all contents Copyright © 2019 James B. Higgins. All Rights Reserved.