“It feels like pair programming with a confident intern who’s wrong 30% of the time.”
Gino Ferrand, writing today from Santa Fe, New Mexico 🌞
The stats seem clean enough. Stack Overflow’s latest developer survey reports that 84% of devs are already using or planning to use AI tools. Just last year, that number was 76%. Adoption is not just climbing. It’s compounding.
But zoom in and things get messy.
Nearly half of developers say they don’t trust AI’s output. And almost the same number admit they’re spending time debugging its mistakes. The contradiction is sharp: developers are using these tools constantly while doubting their accuracy and burning hours fixing what they break.
This is not a paradox. It’s a pattern.
When people are desperate for speed, they’ll tolerate unreliability. We’ve seen this before. Windows shipped with bugs. Early cloud infrastructure was fragile. But they moved faster, scaled better, and cost less. So we used them anyway.
AI tools are following the same trajectory. Except this time, the bug is often invisible. It compiles. It runs. It passes a test or two. But behind the curtain, it’s insecure, logically brittle, or simply wrong.
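To make that concrete, here is a contrived sketch of the failure mode: a lookup function that runs, passes the obvious test, and still ships a textbook SQL injection hole. The table and data are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def find_user(username: str):
    # Plausible AI output: it runs, and it returns the right row for 'alice'...
    # ...but the query is built by string interpolation, so an input like
    # "' OR '1'='1" returns every row in the table.
    return conn.execute(
        f"SELECT id, name FROM users WHERE name = '{username}'"
    ).fetchone()

def find_user_safely(username: str):
    # The boring fix a reviewer should insist on: a parameterized query.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchone()

assert find_user("alice") == (1, "alice")         # the test that "passes"
assert find_user_safely("alice") == (1, "alice")
```

Nothing in that first function fails until an attacker shows up, which is exactly why this class of bug sails through a quick review.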
And it’s not just a dev problem. It’s a leadership issue.
The core dynamic here is not about syntax. It is about trust. Developers are being nudged into a future where their day-to-day decisions are shaped by suggestions they did not ask for and cannot always verify. Every AI tool, whether it is Copilot, Claude, Cursor, or Cody, is designed to feel confident. But confidence is not competence. And speed is not safety.
We’ve already seen the consequences.
Studies have shown that some models hallucinate dependencies in roughly 20% of generated code samples, referencing packages that do not exist and recommending vulnerable patterns with a smile. In one UTSA study, researchers counted over 440,000 hallucinated package references across the samples they analyzed. This is not just sloppy output. It is a new kind of software supply chain risk.
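The defense against this does not have to be exotic. Below is a minimal sketch, not a production tool, of a guardrail that catches hallucinated dependencies before install: it assumes a plain requirements.txt (lockfiles and other ecosystems would need their own handling) and asks PyPI’s public JSON endpoint whether each package actually exists.

```python
# Minimal sketch: flag requirements that do not resolve to a real PyPI package.
import re
import urllib.error
import urllib.request

def package_exists(name: str) -> bool:
    """Check PyPI's public JSON API for the package (HTTP 404 = unknown)."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

def check_requirements(path: str = "requirements.txt") -> None:
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # Naively strip pins/extras: "requests[socks]>=2.31" -> "requests"
            name = re.split(r"[\[<>=!~; ]", line, maxsplit=1)[0]
            if not package_exists(name):
                print(f"WARNING: '{name}' not on PyPI -- possible hallucinated dependency")

if __name__ == "__main__":
    check_requirements()
```

A name that 404s today is exactly the kind of package a squatter can register tomorrow, which is what makes this a supply chain problem rather than a mere build error.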
Even GitHub reports that non-AI users are now 60% slower than their AI-augmented peers. The pressure to use these tools is immense. But the playbook for using them safely is still under construction.
Build faster with LATAM engineers who get AI
Hiring great developers used to be the bottleneck. Now it's about finding people who can move fast, collaborate across tools, and co-build with AI.
TECLA helps U.S. tech teams hire senior-level, English-proficient engineers from across Latin America at up to 60% less cost, and in your time zone.
Want to see how fast you could scale?
So what are engineering leaders supposed to do?
You cannot put the genie back in the repo. AI is here. It is writing code, pushing PRs, and shaping workflows. But if you want to avoid turning your team into a debugging farm for a hallucinating robot, you need new norms.
Build a review culture. Automate security checks. Add guardrails inside the IDE. And most importantly, train your developers to think critically about machine suggestions, not just consume them.
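What does a guardrail look like in practice? Here is one illustrative sketch, assuming a Git workflow: a pre-commit hook that blocks staged Python files containing a few patterns AI assistants commonly suggest unsafely. The denylist is hypothetical and deliberately crude; a real team would lean on a dedicated scanner such as bandit or semgrep in CI.

```python
#!/usr/bin/env python3
# Illustrative pre-commit hook (save as .git/hooks/pre-commit, mark executable).
# A crude guardrail, not a real security scanner.
import re
import subprocess
import sys

# Hypothetical denylist, for illustration only.
RISKY_PATTERNS = {
    r"\beval\(": "eval() on dynamic input",
    r"\bexec\(": "exec() on dynamic input",
    r"shell\s*=\s*True": "subprocess with shell=True",
    r"verify\s*=\s*False": "TLS verification disabled",
}

def staged_python_files() -> list[str]:
    """List staged .py files (added, copied, or modified)."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith(".py")]

def main() -> int:
    failures = 0
    for path in staged_python_files():
        with open(path, encoding="utf-8") as f:
            for lineno, line in enumerate(f, start=1):
                for pattern, why in RISKY_PATTERNS.items():
                    if re.search(pattern, line):
                        print(f"{path}:{lineno}: {why}")
                        failures += 1
    if failures:
        print(f"Blocked: {failures} risky line(s). Review before committing.")
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main())
```

The point is not this particular denylist. It is that the check runs automatically, before the suggestion becomes a commit, so the reviewer’s attention is spent where it matters.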
Because here is the real risk: not that AI writes bad code, but that we stop noticing when it does.
More to come...
Recommended Reads
✔️ Developers Are Using AI More… But Trusting It Less (tech.co)
✔️ Vibe coding is the future… just don't trust it (yet) (Business Insider)
– Gino Ferrand, Founder @ Tecla