“It gets you 80 percent of the way there. But that last 20 percent is where all the landmines live.”

IT Pro team

Gino Ferrand, writing today from Santa Fe, New Mexico 🌞

A new study out of IT Pro paints a clear picture of the new reality inside modern dev teams. Senior engineers are leading the charge on AI adoption. They are 2.5 times more likely to use AI-generated code than junior developers.

But that is not the whole story.

Those same senior developers report spending up to 30 percent of their time debugging what the AI gives them. Parsing logic errors. Fixing hallucinated API calls. Rewriting output that passes its own tests but breaks on edge cases.
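To make that last one concrete, here is a hypothetical illustration (not taken from the study): an AI-suggested helper, plus the test the AI wrote for it, that looks green in review but falls over on an input a senior would catch on sight.

```python
# Hypothetical AI-suggested helper: average order value for a customer.
def average_order_value(orders):
    # Works for the obvious case...
    return sum(o["total"] for o in orders) / len(orders)

# The test generated alongside it -- passes.
assert average_order_value([{"total": 10}, {"total": 30}]) == 20

# The edge case it never considered: a customer with no orders.
# average_order_value([])  # ZeroDivisionError in production
```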

And junior developers? They are falling behind. In adoption, in trust, and in output. The gap is not just about tooling fluency. It is about the emergence of two very different developer experiences inside the same team.

AI makes the fast go faster. The rest try to keep up.

For senior devs, AI is a force multiplier. They know how to spot a lazy pattern. They know what smells wrong in a diff. They treat AI as a second brain, not a first draft. They still ship faster, even when it breaks. Because they know how to fix it.

But for junior devs, the equation is very different. They often do not know what the AI got wrong. They do not have the experience to spot a dangerous assumption, or the confidence to override it. Some follow bad output into hours of unproductive debugging. Others opt out entirely.

In teams where AI is now standard, this creates friction. The seniors are shipping fast, while juniors hesitate. And worse, they are learning slower. Because if the AI is doing the scaffolding, the junior is not getting the reps.

The apprenticeship model is breaking. Quietly.

Historically, junior developers learned by writing code, breaking things, and getting feedback. Now they are often asked to start from AI output, modify it, and fix it when it fails. But if you do not know what good code looks like in the first place, how do you know what to trust?

One manager shared that onboarding juniors with Copilot actually slowed them down. Not because the tool was bad, but because it skipped the struggle. They never developed instincts.

And this matters. Because instincts are what keep bad code out of production.

Build faster with LATAM engineers who get AI

Hiring great developers used to be the bottleneck. Now it's about finding people who can move fast, collaborate across tools, and co-build with AI.

TECLA helps U.S. tech teams hire senior-level, English-proficient engineers from across Latin America at up to 60% less cost, and in your time zone.

Want to see how fast you could scale?

AI is rewriting the path to seniority. But no one has updated the map.

If AI is the new baseline, then we need new ways to train. New ways to review. New definitions of mentorship. Otherwise, we end up with teams where the top 20 percent build and debug at speed, while the rest sit in the shadows, unsure whether to intervene or stay silent.

That is not a team. That is a bottleneck.

The worst-case scenario? Your juniors stop growing. Your seniors burn out. And no one can explain why the system is acting the way it does.

The fix is not less AI. It is more intentional AI.

Engineering leaders need to think about AI adoption like they think about tooling rollouts. Not everyone will use it the same way. Not everyone should.

Juniors need to be trained not just in syntax and systems, but in how to audit, challenge, and repair AI output. That means reviews that dig into decision-making. Pairing sessions that explain the why, not just the what. And perhaps most critically, spaces where it is safe to reject the AI entirely.

Because trust should not be the default. It should be earned. And that lesson goes for both the machine and the human still learning to code beside it.

More to come…

Gino Ferrand, Founder @ Tecla
