“We’re not just seeing attacks on infrastructure. We’re seeing attacks on reality.”

Clint Watts, GM of Microsoft Threat Analysis Center

Gino Ferrand, writing today from Mexico City 🇲🇽

For years, the idea that AI would be used in cyberwarfare felt like science fiction. Something out of a Netflix thriller. Nation-states manipulating data, impersonating identities, breaking into networks with tools smarter than any red team.

Now it’s just Wednesday.

Last week, Microsoft released a sobering report: Russia, China, Iran, and North Korea are actively using generative AI to power their cyber operations. The tactics vary: phishing campaigns, identity spoofing, deepfake disinformation, and AI-assisted code that probes for vulnerabilities. But the strategy is clear. LLMs are not just helping developers. They are helping adversaries.

And it’s happening faster than we thought.

Attack surfaces are multiplying, not shrinking

The rise of AI tooling has changed who can launch an attack and how fast.

It used to take a skilled engineer to craft a convincing spear phishing email or debug an exploit chain. Now, a junior operator with access to a model can spin up entire campaigns in minutes. Need to mimic a government official’s speech patterns? Done. Want to generate believable technical docs to phish engineers at a Fortune 500 company? Easy.

Generative AI has lowered the barrier to entry. But it has also raised the ceiling.

Microsoft’s threat intelligence teams reported that state actors are blending AI-generated content with traditional espionage techniques. AI is not replacing their playbook. It is upgrading it.

And that should worry everyone in engineering leadership.

Security models built for humans are failing against machines

The problem isn’t just that these attacks are more convincing. It’s that they’re more frequent, more automated, and harder to trace.

Imagine a fake job post targeting engineers with a malicious repo link. Except the post was written by a model trained on your company’s style guide. The recruiter’s profile picture? AI-generated. The project pitch? Pulled from scraped internal docs on GitHub.

None of it is real. But it looks close enough.

AI is being used to impersonate engineers, craft malicious prompts, and inject vulnerabilities through trust channels that previously felt safe. LinkedIn DMs, open source contribution invites, even Slack intros.

The traditional guardrails of security awareness (skepticism, visual verification, human intuition) are breaking down. Because the attackers don’t sound like scammers anymore. They sound like your coworkers.

Build faster with LATAM engineers who get AI

Hiring great developers used to be the bottleneck. Now it's about finding people who can move fast, collaborate across tools, and co-build with AI.

TECLA helps U.S. tech teams hire senior-level, English-proficient engineers from across Latin America at up to 60% less cost, and in your time zone.

Want to see how fast you could scale?

Your developers are now targets. And vectors.

The era of targeting sysadmins is over. Today, developers are the new attack surface.

Why? Because they have access. They trust package managers. They paste code from chatbots. And they move fast.

AI is helping attackers exploit that behavior. Microsoft’s report noted a surge in attempts to trick devs into installing poisoned dependencies or accepting PRs laced with subtle backdoors. Some campaigns even targeted specific LLM plugins and extensions, hoping to hijack workflows before anyone noticed.

The line between dev tool and attack vector is getting blurry. And the faster your team adopts AI tooling without oversight, the more exposed you are.
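One practical starting point, and strictly a sketch under assumed conventions: a CI step that refuses to pass when something is installed that nobody reviewed. The allowlist.txt file and its name==version format below are hypothetical conventions for illustration; most teams would get the same effect from lockfiles and hash pinning in their package manager.

```python
# Minimal sketch (Python 3.9+, stdlib only): fail a CI step when any
# installed package/version is missing from a team-reviewed allowlist.
# "allowlist.txt" (one "name==version" pin per line) is a hypothetical
# convention for this example, not a standard.
import sys
from importlib.metadata import distributions


def load_allowlist(path: str = "allowlist.txt") -> set[str]:
    """Read reviewed 'name==version' pins, ignoring blanks and comments."""
    with open(path) as f:
        return {
            stripped.lower()
            for line in f
            if (stripped := line.strip()) and not stripped.startswith("#")
        }


def main() -> int:
    allowed = load_allowlist()
    # Compare everything actually installed against what was reviewed.
    unexpected = sorted(
        pin
        for dist in distributions()
        if (pin := f"{dist.metadata['Name']}=={dist.version}".lower())
        not in allowed
    )
    if unexpected:
        print("Packages installed but never reviewed:")
        for pin in unexpected:
            print(f"  {pin}")
        return 1  # non-zero exit fails the pipeline
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Wire it in as the last step of your install job; the non-zero exit blocks the build before a poisoned dependency ships.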

So what should engineering leaders do?

This isn’t a call to ban AI. That ship has sailed. Your engineers are using Copilot, ChatGPT, Claude, and a dozen other tools whether you’ve signed off or not.

But this is a call to lead.

Audit your AI use. Set policies. Establish threat models that assume AI-generated content can and will be used against your team. Educate developers on the new threat patterns. Not with slide decks, but with real examples. And stop assuming that traditional security controls will catch LLM-powered attacks. They won’t.
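On the “real examples” point: some of the new patterns are cheap to scan for mechanically. Below is a minimal sketch of a pre-merge check that flags a few tells that show up in poisoned snippets and pasted chatbot output. The pattern list is an illustrative assumption, not a vetted ruleset, and it will produce false positives you’ll want to tune.

```python
# Minimal sketch: flag file contents for patterns common in poisoned
# snippets and AI-generated lures. The pattern list is an illustrative
# assumption; tune it against your own threat model.
import re
import sys

SUSPICIOUS = [
    (re.compile(r"curl[^|\n]*\|\s*(?:ba|z)?sh"), "pipe-to-shell install"),
    (re.compile(r"base64\s+(?:-d|--decode)"), "inline base64 decode"),
    (re.compile(r"[A-Za-z0-9+/]{200,}={0,2}"), "long base64-like blob"),
]


def scan(path: str) -> list[tuple[str, str]]:
    """Return (label, snippet) findings for one file."""
    with open(path, errors="ignore") as f:
        text = f.read()
    return [
        (label, match.group(0)[:60])
        for pattern, label in SUSPICIOUS
        for match in pattern.finditer(text)
    ]


if __name__ == "__main__":
    hits = 0
    for path in sys.argv[1:]:
        for label, snippet in scan(path):
            print(f"{path}: {label}: {snippet!r}")
            hits += 1
    sys.exit(1 if hits else 0)  # non-zero exit blocks the merge
```

Run it over the files touched in a PR (for example, git diff --name-only origin/main | xargs python scan.py) and treat hits as a prompt for human review, not an automatic verdict.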

If Microsoft’s report is right, we are entering a new phase of cyberwarfare. One where the battleground is not just firewalls and endpoints, but models and prompts.

And in that world, your dev team isn’t just part of the company. They’re part of the perimeter.

More to come...

Gino Ferrand, Founder @ Tecla
