"The bitterness of poor quality remains long after the sweetness of low price is forgotten."
Gino Ferrand, writing today from Seattle, WA 🌄
If Part 1 exposed the cracks in AI-generated code, this one’s about how engineering teams are trying to patch them...before the structure weakens.
Across the industry, leaders aren’t rejecting AI...they’re working around it, over it, and sometimes despite it. The tools are powerful. But without process, policy, and accountability, they can become powerful liabilities.
To stay ahead of the quality curve, smart teams have adopted a new mantra: treat AI-generated code as a draft, not a deliverable.
I recently heard a senior engineer sum it up well in an interview: "AI-written code needs the same refactoring and cleanup as anything else—it’s not drop-in perfection."
This principle underlies most of the emerging best practices:
Code reviews are stricter. Some teams annotate AI-generated pull requests so reviewers know to give them extra scrutiny.
Mission-critical systems often prohibit unreviewed AI contributions entirely.
Prompt hygiene is becoming a thing...developers are being coached on how to ask for secure, maintainable, well-documented code instead of just functional code.
Test coverage expectations are rising. Engineers often ask the AI to generate both the function and its tests, then review both for gaps. Fuzz testing and static analysis help catch what the AI misses (a quick sketch follows this list).
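To make that last point concrete, here’s a minimal sketch. The helper function and its blind spot are hypothetical (not drawn from any team mentioned above), but the shape is typical: the AI writes one happy-path test for its own code, and a reviewer-added property-based fuzz test, using Hypothesis here, exposes the case it never handled.

```python
# Hypothetical example: an AI-suggested helper, the test the model wrote for
# itself, and the reviewer-added fuzz test that exposes the gap.
# Requires: pip install pytest hypothesis

from hypothesis import given, strategies as st


def safe_percentage(part: float, whole: float) -> float:
    """AI-suggested helper: return part/whole as a percentage, rounded to 1 decimal."""
    return round(part / whole * 100, 1)


def test_happy_path():
    # The kind of test the model tends to generate: one obvious case, no edges.
    assert safe_percentage(1, 4) == 25.0


@given(
    part=st.floats(allow_nan=False, allow_infinity=False),
    whole=st.floats(allow_nan=False, allow_infinity=False),
)
def test_never_raises(part, whole):
    # Reviewer-added fuzz test: Hypothesis quickly finds whole=0 and the
    # ZeroDivisionError the "safe" helper never handled.
    safe_percentage(part, whole)
```

Running pytest on this file passes the happy-path test and fails the fuzz test on whole=0...which is exactly the conversation you want to have before the code merges.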
Guardrails like these aren’t optional...they’re the only way to avoid accumulating mess at machine speed. As one CIO.com report put it, "developers may have false faith in AI-generated code that necessitates closer review."
Some companies have gone further. Infosys, for instance, adopted Copilot widely, but paired it with a rule: no AI-generated code enters production without senior approval. Microsoft’s internal practices encourage using AI to scaffold features, but insist on manual integration and full test coverage before merging.
Even GitHub, the home of Copilot, is careful about how it frames usage. In a 2024 case study with Accenture, the company reported a 15% higher PR merge rate and 84% more successful builds among Copilot users...but only when teams were already following strong engineering practices. The AI helped, but it didn’t replace due diligence.
Security teams are also tightening the reins. A study on GitHub Copilot’s code output, published on arXiv.org, found that 32.8% of Python and 24.5% of JavaScript suggestions introduced security flaws. Organizations dealing with sensitive data are responding with policies that ban AI-written code from security-critical components unless reviewed line-by-line.
Some are even experimenting with two-model workflows: one AI to write the code, another to critique it. Researchers have proposed frameworks like CriticGPT, and Amazon’s CodeWhisperer now includes real-time security scanning for its suggestions.
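The writer/critic pattern itself is simple enough to sketch. This is a generic illustration, not how CriticGPT or CodeWhisperer actually work, and generate() is a placeholder for whichever model client a team uses:

```python
# Generic writer/critic loop. generate() is a placeholder for whatever LLM
# client a team actually uses; this sketches the pattern, not any product.

WRITER_PROMPT = "Write a Python function that {task}. Include type hints, a docstring, and input validation."
CRITIC_PROMPT = (
    "Review the following code for security flaws, missing error handling, and "
    "maintainability problems. Reply APPROVE if none, otherwise list the issues.\n\n{code}"
)


def generate(prompt: str) -> str:
    """Placeholder for a real model call (hosted API, local model, etc.)."""
    raise NotImplementedError


def write_with_critique(task: str, max_rounds: int = 3) -> str:
    code = generate(WRITER_PROMPT.format(task=task))
    for _ in range(max_rounds):
        review = generate(CRITIC_PROMPT.format(code=code))
        if review.strip().upper().startswith("APPROVE"):
            return code
        # Feed the critic's objections back to the writer and revise.
        code = generate(
            f"Revise the code to address this review.\n\nCODE:\n{code}\n\nREVIEW:\n{review}"
        )
    # Still unapproved after max_rounds: escalate to a human instead of merging.
    return code
```

Even with a critic in the loop, anything that survives the exchange still lands in a human review queue. The second model narrows the search; it doesn’t sign off.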
Beyond the code itself, there’s a cultural shift underway. CTOs report that junior developers are embracing AI with enthusiasm, while seniors are more cautious. This divide has led to internal debates over skill atrophy. Are we training a generation of engineers who can prompt but not program?
Mitch Ashley, VP at Techstrong Research, called it out plainly: "AI-induced technical debt is real if teams have overinflated expectations. You still need to polish, test, and refactor—no exceptions."
The future of software engineering isn’t just AI... it’s AI-powered teams. By combining AI-driven productivity with top-tier remote nearshore engineers, companies unlock exponential efficiency at a 40-60% lower cost, all while collaborating in the same time zone.
✅ AI supercharges senior engineers—faster development, fewer hires needed
✅ Nearshore talent = same time zones—real-time collaboration, no delays
✅ Elite engineering at significant savings—scale smarter, faster, better
Some companies are even labeling AI-authored code in changelogs, so future maintainers know to approach it with extra care. Others have added CI rules that flag AI-generated contributions for review, or require extra approvals. It’s not about blocking the AI...it’s about surfacing the risk.
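What might a rule like that look like? Here’s a rough sketch of a pre-merge check, with invented conventions: an "AI-assisted:" commit trailer and an "ai-reviewed" PR label, neither of which is any kind of standard.

```python
# Hypothetical pre-merge check: AI-assisted commits need an extra approval label.
# The "AI-assisted:" trailer and "ai-reviewed" label are invented conventions;
# substitute whatever your team standardizes on.

import subprocess
import sys


def commit_messages(base: str, head: str) -> list[str]:
    out = subprocess.run(
        ["git", "log", "--format=%B%n==END==", f"{base}..{head}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split("==END==")


def main(base: str, head: str, pr_labels: set[str]) -> int:
    flagged = [m for m in commit_messages(base, head) if "AI-assisted:" in m]
    if flagged and "ai-reviewed" not in pr_labels:
        print(
            f"{len(flagged)} AI-assisted commit(s) in this PR; "
            "add the 'ai-reviewed' label after a senior review before merging."
        )
        return 1
    return 0


if __name__ == "__main__":
    # In CI you'd pass real refs and the PR's labels from the platform's env vars.
    sys.exit(main(sys.argv[1], sys.argv[2], set(sys.argv[3:])))
```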
And let’s talk tools. The usual suspects...ESLint, SonarQube, CodeQL...are being joined by AI-aware linters that track duplication, churn, and comment quality. Platforms like GitClear, Code Climate, and LinearB are offering metrics to compare AI-authored commits against human ones. If AI is bloating your codebase or lowering quality, these tools will show you the receipts.
In production, AI-written features are often deployed behind feature flags, with tighter observability. If something goes sideways, the rollback is ready. The new rule of thumb: deploy AI code like it’s radioactive until proven stable.
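In code, that pattern is less dramatic than it sounds. Everything below is a placeholder (the flag client, the flag name, both search implementations), but the shape is the point: the AI-drafted path sits behind a flag, and the proven path stays ready as the fallback.

```python
# Sketch: AI-drafted code path behind a feature flag, proven path as fallback.
# The flag client, flag name, and both implementations are placeholders.

import logging

logger = logging.getLogger(__name__)


class Flags:
    """Tiny stand-in for a real flag provider (LaunchDarkly, Unleash, a config table)."""

    def __init__(self, enabled: set[str]):
        self._enabled = enabled

    def is_enabled(self, name: str) -> bool:
        return name in self._enabled


def ranked_search_v1(query: str) -> list[dict]:
    """Existing, battle-tested implementation (stubbed)."""
    return [{"query": query, "ranker": "v1"}]


def ranked_search_v2(query: str) -> list[dict]:
    """AI-drafted replacement under evaluation (stubbed)."""
    return [{"query": query, "ranker": "v2"}]


def search(query: str, flags: Flags) -> list[dict]:
    if flags.is_enabled("ai_generated_ranker"):
        try:
            # New path: deployed behind a flag, watched closely, easy to switch off.
            return ranked_search_v2(query)
        except Exception:
            logger.exception("ai_generated_ranker failed; falling back to v1")
    # Rollback is just a flag flip: the proven path never left the codebase.
    return ranked_search_v1(query)
```

Rollback becomes a flag flip rather than an emergency redeploy...exactly the posture the "radioactive until proven stable" rule calls for.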
And through it all, the message is clear. AI isn’t replacing developers...it’s demanding more from them. More judgment. More review. More ownership.
As one Google engineering lead said in a team memo I recently read: "Code quality isn’t negotiable. If AI helps you move faster, great. But you still own what you ship."
In this new era, that might be the most important line of code in the entire stack.
✔️ "How to Use AI in Coding - 12 Best Practices in 2025" (Zencoder.ai)
✔️ "AI Code Generation: The Risks and Benefits of AI in Software" (Legit Security)
✔️ "How AI Can Help You Write Better Code" (Medium)
– Gino Ferrand, Founder @ TECLA