“Even in the age of advanced AI, programming demands too much creativity and judgment to be fully automated.”

Bill Gates, Gates Notes

Gino Ferrand, writing today from Austin, TX 🌞

In 2025, AI isn’t just autocompleting your functions. It’s rewriting tests, scaffolding components, reviewing PRs. And in a few orgs, it’s already shipping to production with minimal human oversight.

So when Bill Gates declared this week that programming will remain a “human profession for centuries,” it felt, frankly, out of sync.

Or did it?

At first glance, Gates sounds like he’s resisting the tide. But look closer, and you’ll see a deeper logic. The kind of logic that engineering leaders, especially those staring down shrinking headcount and rising AI expectations, might want to examine carefully.

Because the question isn’t whether AI can write code.

It’s whether it can write good code, in context, under pressure, at scale, and in collaboration with humans who don’t always know what they want.

And so far, the answer is... not yet.

The illusion of infinite juniors

Codex, Cursor, Claude, Copilot: they’re framed as junior devs. Fast. Helpful. Cheap. Sometimes even smart. But they lack the one thing every senior engineer has: judgment.

They hallucinate dependencies that don’t exist. They write secure-sounding code that’s riddled with vulnerabilities. They praise SQL injection bugs for being performant.
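To make that concrete, here’s a toy sketch (in Python, against an in-memory SQLite database, with hypothetical function names) of the failure mode: the string-interpolated query is the kind of secure-sounding code an assistant will happily emit, and it works fine right up until someone feeds it an injection payload.

```python
import sqlite3

# In-memory demo database with a single user row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # The pattern assistants often generate: interpolating input into SQL.
    # It works in the happy path, which is why it looks fine in review.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats the value as data, not SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

# A classic payload turns the unsafe query's WHERE clause into "match everything".
payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # leaks every row: [('admin',)]
print(find_user_safe(payload))    # returns []: no user has that literal name
```

The diff between the two functions is one line, and nothing about the unsafe version looks broken at a glance. That's exactly the kind of judgment call a reviewer still has to make.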

And when things break, they don’t explain why. They don’t learn. They don’t take responsibility.

Which means your humans still have to.

Even on AI-forward teams, the ones shipping with help from Copilot or Codex, the AI is not the pilot. It’s the intern. And someone still has to oversee the interns.

The myth of replacement

Here’s the real reason Gates is probably right.

The further you go up the stack, from scaffolding to architecture, from tickets to tradeoffs, the more programming looks less like instruction and more like negotiation.

Negotiating with legacy systems. With incomplete specs. With unpredictable users. With time. With scope. With a CTO who wants everything shipped yesterday and secure tomorrow.

AI can suggest solutions. But it can’t yet argue for the right one. It can’t ask annoying questions. It doesn’t challenge the spec. And it doesn’t say, “This whole module needs to be rethought.”

That’s human territory. Still.

Build faster with LATAM engineers who get AI

Hiring great developers used to be the bottleneck. Now it's about finding people who can move fast, collaborate across tools, and co-build with AI.

TECLA helps U.S. tech teams hire senior-level, English-proficient engineers from across Latin America at up to 60% less cost, and in your time zone.

Want to see how fast you could scale?

The rise of prompt-driven leadership

That doesn’t mean engineering roles are static.

The best engineers I know aren’t just writing code anymore. They’re describing systems well enough that AI can scaffold them. They’re acting as translators between business logic and generative syntax. They’re curating output instead of authoring everything from scratch.

It’s not about knowing the right answer. It’s about knowing when the answer smells off. And that kind of knowing, what Gates calls “judgment,” isn’t going away.

If anything, it’s becoming more valuable.

The takeaway

Will junior dev roles shrink? Probably. Will prompt engineering eat up the low-skill end of software work? It already is.

But the hard parts of engineering, the ambiguous parts, the judgment calls, the tradeoff decisions, those are still human.

Bill Gates isn’t clinging to nostalgia. He’s pointing to reality. We’re building with AI now, yes. But we’re also babysitting it. Correcting it. Auditing it.

And for the foreseeable future, someone still has to take responsibility for what ships.

So maybe programming is the last human standing.

More to come...

Gino Ferrand, Founder @ Tecla
