"It doesn’t take a convoluted set of circumstances. Just typing one command most people use every day. That’s all it takes."
Gino Ferrand, writing today from Santa Fe, New Mexico 🏜️
Coding assistants promised to eliminate drudge work. Instead, they may be setting us up for the next great wave of software breaches.
The red flags are already waving.
In 2024, nearly 30% of all code being written was AI-generated. Today, some models hallucinate code 20% of the time. A recent University of Texas at San Antonio (UTSA) study found that, out of more than 2 million package references in AI-generated code samples, over 440,000 pointed to packages that don’t exist. That’s not just a bug. That’s a weapon.
Here’s how: attackers are registering those hallucinated package names on public registries and loading them with malware, then waiting for a dev to blindly trust an AI suggestion and run npm install or pip install. One fake dependency. One click. Game over.
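The boring defense is to treat every unfamiliar dependency name as guilty until proven otherwise. Below is a minimal sketch of that idea in Python (my own illustration, not something from the UTSA study): it asks the public PyPI JSON API whether a suggested package even exists and how old it is. The 90-day “too new” threshold is an arbitrary placeholder, and existing on the registry is obviously not proof that a package is safe.

import sys
from datetime import datetime, timezone

import requests  # third-party; pip install requests


def check_package(name: str, min_age_days: int = 90) -> None:
    """Warn if a suggested package is missing from PyPI or suspiciously new."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code == 404:
        print(f"'{name}' is not on PyPI at all; likely hallucinated, do not install.")
        return
    resp.raise_for_status()

    # The earliest upload across all releases tells us how long the name has existed.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in resp.json()["releases"].values()
        for f in files
    ]
    if not uploads:
        print(f"'{name}' exists but has no uploaded files; treat it as suspect.")
        return

    age_days = (datetime.now(timezone.utc) - min(uploads)).days
    if age_days < min_age_days:
        print(f"'{name}': first upload only {age_days} days ago; review it carefully.")
    else:
        print(f"'{name}': first upload {age_days} days ago; still review it, but it has history.")


if __name__ == "__main__":
    check_package(sys.argv[1])

Run it against any name an assistant suggests before pip gets anywhere near it. The npm registry exposes similar metadata if your stack lives on the JavaScript side.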
Joseph Spracklen, who led the UTSA study, called it "hallucination squatting": a supply chain attack that exploits not the code itself, but the trust developers place in machines. And it’s not isolated. Across tools like Copilot, CodeWhisperer, and Claude, researchers found 65% of AI outputs were insecure on the first try:
SQL injection
XSS
Path traversal
Use of outdated crypto (MD5, SHA1)
Hardcoded secrets
Memory corruption in C/C++
Basically: the stuff you get grilled on in a secure coding interview. Except this time, it’s being generated confidently and at scale.
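To make that concrete, here’s a contrived Python snippet of the kind of thing assistants routinely produce for two items on that list, next to what should actually ship. It isn’t quoted from any particular tool; it’s just the pattern.

# The pattern assistants keep producing: weak hashing plus a baked-in secret.
import hashlib

API_KEY = "sk-live-1234567890abcdef"  # hardcoded secret, now in git history forever

def store_password(pw: str) -> str:
    # MD5 is fast and unsalted, which is exactly what you don't want for passwords.
    return hashlib.md5(pw.encode()).hexdigest()


# What should ship instead: secrets injected at runtime, a salted slow hash.
import os
import secrets

SAFE_API_KEY = os.environ.get("API_KEY", "")  # from the environment or a secrets manager

def store_password_safely(pw: str) -> str:
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, 600_000)
    return f"{salt.hex()}:{digest.hex()}"  # keep the salt with the hash

A dedicated password-hashing library (bcrypt, Argon2) is better still. The point is that the first version runs perfectly, looks finished, and ships.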
And most devs don’t catch it. Because these tools are fast. Convincing. They’re saving time. Until they’re not.
The future of software engineering isn’t just AI... it’s AI-powered teams. By combining AI-driven productivity with top-tier remote nearshore engineers, companies unlock exponential efficiency at a 40-60% lower cost, all while collaborating in the same time zone.
✅ AI supercharges senior engineers—faster development, fewer hires needed
✅ Nearshore talent = same time zones—real-time collaboration, no delays
✅ Elite engineering at significant savings—scale smarter, faster, better
We’ve already had warning shots.
Mercedes-Benz leaked internal code due to a hardcoded admin token.
Microsoft leaked 38TB of internal data after an AI project misconfigured an Azure SAS token.
A 2024 report showed 6.4% of repos using Copilot leaked secrets, versus 4.6% overall. That’s a 40% increase.
The "Rules File Backdoor" exploit showed how prompt injections can poison an entire dev org’s AI output.
These aren’t theoretical problems.
Every AI tool is producing insecure code today. Some more than others, sure... but none of them are safe by default. One study from Georgetown found that only 30% of generated C/C++ code was secure, even among samples that compiled. The rest? Buffer overflows, memory leaks, null derefs.
This isn’t just a developer issue. This is a leadership issue.
Do you know which tools your team is using?
Are AI outputs flagged for additional review?
Are your static analyzers catching AI-specific bug patterns? (See the sketch after this list.)
Has your security team updated their threat model to include AI prompt injection or hallucination squatting?
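On the static-analysis question: the checks don’t have to be exotic. Here’s a toy Python sketch, illustrative only, that flags two of the patterns from earlier (weak hashes and secret-looking string literals). In practice you’d lean on mature tools like Bandit, Semgrep, or gitleaks; this just shows how cheap the baseline is.

import ast
import sys

SUSPECT_NAMES = ("SECRET", "TOKEN", "PASSWORD", "API_KEY")
WEAK_HASHES = {"md5", "sha1"}


def scan(path: str) -> list[str]:
    """Return findings for weak hash calls and secret-looking string assignments."""
    findings = []
    source = open(path, encoding="utf-8").read()
    for node in ast.walk(ast.parse(source, filename=path)):
        # Flag hashlib.md5(...) / hashlib.sha1(...) style calls.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            if node.func.attr in WEAK_HASHES:
                findings.append(f"{path}:{node.lineno}: weak hash '{node.func.attr}'")
        # Flag string literals assigned to names that look like secrets.
        if isinstance(node, ast.Assign) and isinstance(node.value, ast.Constant) \
                and isinstance(node.value.value, str):
            for target in node.targets:
                if isinstance(target, ast.Name) and any(
                    hint in target.id.upper() for hint in SUSPECT_NAMES
                ):
                    findings.append(
                        f"{path}:{node.lineno}: possible hardcoded secret '{target.id}'"
                    )
    return findings


if __name__ == "__main__":
    for file_path in sys.argv[1:]:
        print("\n".join(scan(file_path)) or f"{file_path}: no findings")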
If the answer to any of those questions is no, you’re probably already exposed. Because here’s the uncomfortable truth: AI code is treated with more trust than it deserves. Many developers, especially juniors, assume the tool knows better.
And sometimes it does.
But sometimes it writes insecure code, praises its own broken output, and recommends libraries that don’t exist. Just last month, a developer reported that ChatGPT described code containing a SQL injection vulnerability as “secure and performant.”
This is the trap. AI code often works. That’s the danger.
It looks clean. It runs. It even passes basic tests.
But underneath the surface, it hides outdated patterns, insecure defaults, and logic flaws that a seasoned dev would never push to prod. And the volume is relentless. AI doesn’t slow down. It doesn’t get tired. It ships insecure code with the same energy and speed as the secure kind.
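Here’s a contrived example of what “works but isn’t safe” looks like, using a throwaway sqlite3 table (the schema is invented for illustration):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")


def find_user_unsafe(name: str):
    # Typical assistant output: user input interpolated straight into SQL.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()


def find_user_safe(name: str):
    # Parameterized query: the driver handles quoting.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()


print(find_user_unsafe("alice"))         # passes the happy-path test
print(find_user_unsafe("x' OR '1'='1"))  # dumps every row in the table
print(find_user_safe("x' OR '1'='1"))    # returns nothing, as it should

Both versions return alice when you test them with alice. Only one survives contact with a hostile string, and nothing in a quick review makes that obvious.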
This is the first true security crisis of the AI coding era: the slow, silent insertion of vulnerabilities into modern software pipelines, happening every day, across every org that adopted AI tooling without guardrails.
No ransomware gang has exploited a hallucinated package yet.
But they’re watching.
And they're ready.
In an upcoming issue, I’ll dig into:
Which AI coding tools are safest (relatively speaking)
What mitigation strategies are working in 2025
How to train your dev team to work with AI securely
Until then, remember: just because it compiles doesn’t mean it’s safe.
More to come...
✔️ AI Hallucinated Packages Fool Unsuspecting Developers (SecurityWeek)
✔️ AI Code Suggestions Sabotage Software Supply Chain (The Register)
✔️ Year of the Twin Dragons: Developers Must Slay the Complexity and Security Issues of AI Coding Tools (SecurityWeek)
– Gino Ferrand, Founder @ TECLA