“The real bottleneck isn’t adoption. It’s trust. The AI can write the code. Most developers just aren’t sure they should ship it.”
Gino Ferrand, writing today from Santa Fe, New Mexico 🌞
At Microsoft, AI now writes between 20 and 30 percent of the code. Not in a sandbox. Not in an experiment. In production environments, across real engineering teams, working on real features.
The tooling is familiar by now: GitHub Copilot, ChatGPT Enterprise, and internal LLMs fine-tuned on proprietary codebases. Most of the time, these tools are just the starting point, a first draft. But increasingly, AI is doing more than scaffolding.
It is writing the test, suggesting the name, updating the docs, and generating entire files from one-line comments.
And this is not theoretical. Microsoft engineers are using these tools daily. GitHub reported that 62 percent of developers are now using AI in some form. That number is climbing, but the trust level is not.
Adoption is up. Confidence is not.
Almost half of developers surveyed by GitHub said they were still worried about accuracy, security, or both. And they should be.
As we have explored in past issues, AI is not just a productivity tool. It is a risk multiplier. Hallucinated packages, outdated patterns, subtle logic errors: these flaws are not rare. They are expected.
AI can write code that passes tests and still introduces vulnerabilities. It can suggest performant-looking functions that leak memory, miss edge cases, or use deprecated methods with cheerful confidence.
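Here is a minimal, hypothetical Python sketch of that failure mode. Every name in it is invented for illustration; the helper and its happy-path test are both plausible assistant output, and the test passes while the edge case waits for production.

```python
# Hypothetical example: a plausible AI-suggested helper, plus the kind of
# test an assistant often generates alongside it. All names are invented.

def average_latency(samples: list[float]) -> float:
    """Return the mean request latency in milliseconds."""
    return sum(samples) / len(samples)  # crashes on an empty list

def test_average_latency():
    # The happy path: this passes, and the suggestion gets accepted.
    assert average_latency([10.0, 20.0, 30.0]) == 20.0

# What no generated test covers: a quiet window with zero samples.
# average_latency([]) raises ZeroDivisionError in production, at 3 a.m.
```

Nothing here is exotic. The code is short, readable, and green in CI. That is exactly what makes it easy to click "accept."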
At Microsoft scale, 30 percent of your codebase being written by AI is not just a tooling stat. It is a cultural and architectural shift.
So who is responsible when AI writes the code?
Is it the developer who clicked “accept”? The team lead who skipped the review? The model that generated the suggestion? Or the vendor who built the model?
This is the real tension. As AI ownership of code increases, human ownership gets fuzzier. And in enterprise environments where legal, compliance, and security obligations stack up fast, fuzzy lines are dangerous.
The shift is not just about who writes the code. It is about who signs off on it.
Build faster with LATAM engineers who get AI
Hiring great developers used to be the bottleneck. Now it’s about finding people who can move fast, collaborate across tools, and co-build with AI.
TECLA helps U.S. tech teams hire senior-level, English-proficient engineers from across Latin America at up to 60% less cost, and in your time zone.
Want to see how fast you could scale?
AI is not just accelerating development. It is distorting accountability.
When AI-generated code becomes the default starting point, team dynamics change. Juniors stop asking why. Mid-levels stop challenging structure. Reviews turn into spell-checks.
And eventually, you end up with a lot of code that works, but no one knows how or why it works that way.
That is not velocity. That is drift.
The next challenge for engineering leaders is visibility, not volume.
How much of your codebase was AI-generated? How much of it has been reviewed by a human with context? What percentage of AI code passes initial tests but fails in production?
These are the questions leadership should be asking now. Because as AI usage climbs, the illusion of productivity can mask a slow decline in code quality, resilience, and institutional knowledge.
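Even a rough measure beats none. Here is a minimal sketch of what that visibility could look like, assuming your team adopts a commit-trailer convention such as an “AI-Assisted: yes” line. The trailer is hypothetical: neither git nor GitHub adds it for you, so it only works if your process enforces it.

```python
# Hedged sketch: estimate the share of commits flagged as AI-assisted,
# assuming a team-enforced "AI-Assisted: yes" commit-message trailer.
import subprocess

def count_commits(extra_args: list[str]) -> int:
    # git rev-list --count HEAD prints the number of matching commits.
    out = subprocess.run(
        ["git", "rev-list", "--count", "HEAD", *extra_args],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

total = count_commits([])
ai_assisted = count_commits(["--grep=AI-Assisted: yes"])
print(f"{ai_assisted}/{total} commits flagged as AI-assisted "
      f"({100 * ai_assisted / max(total, 1):.1f}%)")
```

It is a blunt instrument, but it turns the first question above from a shrug into a number you can track quarter over quarter.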
The goal is not to slow down adoption. It is to design systems of accountability that scale with the tooling.
Trust does not mean clicking “accept” more often. It means knowing when not to.
More to come…
Recommended Reads
✔️ AI can fix typos but create alarming hidden threats: New study sounds warning for techies relying on chatbots for coding (The Economic Times)
✔️ The AI speed trap: why software quality is falling behind in the race to release (TechRadar Pro)
– Gino Ferrand, Founder @ Tecla