“It’s efficient. But if I don’t trust the code, I end up rewriting it anyway. So what did I really save?”

Enterprise Tools Division

Gino Ferrand, writing today from Santa Fe, New Mexico 🌞

Microsoft just confirmed what most engineering leaders already suspected. AI is not an experiment anymore. It is shipping real code.

According to recent disclosures and GitHub telemetry, AI tools like Copilot and ChatGPT are now responsible for 20 to 30 percent of all code written at Microsoft. That is not in pilots. That is across production teams. GitHub also reports that over 62 percent of developers are now using AI tools regularly in their workflow.

On paper, this looks like a clear win. Fewer keystrokes. Faster commits. More shipped features. But the numbers hide something more complicated.

Almost half of those same developers still do not trust the code the AI is writing.

The trust gap is the real story

Engineers are adopting the tools, yes. But adoption is not the same as confidence. And it is certainly not the same as production-ready reliability.

Ask around, and the feedback sounds familiar. “It gets me started, but I have to double-check everything.” Or, “I use it, but only for boilerplate.” In some teams, Copilot is writing entire modules. In others, it is a glorified clipboard.

Why? Because even when the output looks clean, the underlying decisions are opaque. AI-generated code often compiles, passes tests, and even follows formatting standards. But it can still make poor architectural calls, skip edge cases, or suggest outdated patterns that no one on the team would approve.

And the more invisible those mistakes are, the harder it is to catch them. Until it is too late.

AI productivity without review is a ticking clock

At 30 percent code volume, the AI is no longer a sidekick. It is a contributor. But it is one that never explains itself. Never raises a flag. Never says, “Are you sure?”

That creates two problems.

First, engineers become reviewers of machine output, rather than creators of original solutions. That is fine for some teams. But over time, it shifts the cognitive burden from solving the problem to guessing what the AI got wrong.

Second, accountability gets blurry. If a bug is introduced by the model, but approved by a human, who owns it? If security is compromised by an AI import, but passed CI, who gets paged?

The more AI contributes, the more we need processes that clarify ownership and validate trust.

Build faster with LATAM engineers who get AI

Hiring great developers used to be the bottleneck. Now it's about finding people who can move fast, collaborate across tools, and co-build with AI.

TECLA helps U.S. tech teams hire senior-level, English-proficient engineers from across Latin America at up to 60% lower cost, in your time zone.

Want to see how fast you could scale?

Leadership can’t afford to treat AI as just another dev tool

At 5 percent AI-written code, it is a toy. At 15 percent, it is an assistant. At 30 percent, it is an unreviewed teammate pushing commits to prod.

This changes the role of tech leads, security reviewers, and even sprint planning. If the AI suggests ten solutions in five minutes, who is responsible for validating which one is right? If 30 percent of your codebase came from Copilot, what does that do to onboarding, incident response, or refactor strategy?

It is not just about writing faster. It is about understanding what was written, who wrote it, and whether your team trusts it enough to own it.

You can’t scale speed without scaling trust

For engineering leaders, the priority now is not adoption. It is alignment between what the AI produces and what the team actually needs to ship. That means:

  • More emphasis on code review, not less

  • Clear lines of responsibility for AI-generated output

  • Investment in tooling that surfaces how and why the AI made certain decisions

  • Cultural norms that encourage developers to challenge, not blindly accept, suggestions
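One lightweight way to make those responsibility lines concrete is to tag AI-assisted commits and have CI gate them behind extra review. A minimal sketch in Python, assuming a team convention of an `AI-Assisted: yes` commit trailer (the trailer name is a hypothetical convention, not a git or Copilot standard):

```python
# Sketch: detect commits carrying a hypothetical "AI-Assisted: yes" trailer
# so a CI step can require an additional human reviewer before merge.
# The trailer name is an assumed team convention, not a git standard.

def needs_extra_review(commit_message: str) -> bool:
    """Return True if the commit declares AI assistance in its trailers."""
    for line in commit_message.strip().splitlines():
        key, _, value = line.partition(":")
        if key.strip().lower() == "ai-assisted" and value.strip().lower() == "yes":
            return True
    return False

# A CI job could call this on each commit in the PR and fail (exit nonzero)
# until a second approval lands on any commit it flags.
msg = "Add retry logic to payment client\n\nAI-Assisted: yes\nSigned-off-by: Jane Doe"
print(needs_extra_review(msg))  # True
```

The point is not the specific mechanism but that ownership becomes a recorded, enforceable fact rather than something reconstructed after an incident.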

AI is in the repo now. It is writing code. It is influencing architecture. And soon, it may be suggesting entire product flows.

But until teams trust what it is writing, the real velocity gains will stay theoretical.

More to come…

Gino Ferrand, Founder @ Tecla
