“A 3 percent failure rate may not sound like much. But in national security, that’s a disaster.”

Jonathan Spring, Senior Advisor, CISA

Gino Ferrand, writing today from Santa Fe, New Mexico 🌞

The code was clean. The specs matched. The compiler gave it a green light.

But buried deep in the chip’s logic, invisible to most engineers, was a trojan.

Not a metaphor. A literal hardware trojan. A malicious gate, hidden in a Verilog file, designed to open under very specific conditions. The kind of thing that could leak data, brick a system, or quietly reroute control signals during a critical moment.
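
To make that concrete, here is a rough sketch of trojan trigger logic. It is written in Python for readability rather than Verilog, and every name and value in it is made up for illustration:

    # Hypothetical trojan trigger, sketched in Python for readability.
    # In silicon, this would be a handful of gates in Verilog, not software.
    TRIGGER_PATTERN = 0xDEADBEEF  # rare bus value only the attacker knows

    def trojan_gate(bus_value: int, cycle_count: int) -> bool:
        # Fires only when a magic value appears after enough cycles,
        # so ordinary testing almost never exercises it.
        return bus_value == TRIGGER_PATTERN and cycle_count > 1_000_000

    def control_signal(normal_signal: int, bus_value: int, cycle_count: int) -> int:
        # Dormant virtually all of the time. On trigger, quietly reroute.
        if trojan_gate(bus_value, cycle_count):
            return 0  # e.g., drop a safety interlock at a critical moment
        return normal_signal

The point is the rarity. A condition like this can sit dormant through every test the chip will ever see.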

Until now, catching these attacks has been painfully manual. A mix of static analysis, spot reviews, and trust. But a new research project may have just changed the game.

The tool is called PEARL. And it uses AI to scan chip designs with roughly 97 percent accuracy.

AI is learning to see what humans miss

PEARL analyzes Verilog code, the hardware description language (HDL) used to design chips, and flags potential hardware trojans by learning patterns that don't match clean implementations. It works much the way LLMs analyze natural language. Except the language here is silicon logic.

The researchers trained it on known trojan examples and legitimate designs. It now detects malicious modifications with 97.2 percent accuracy. A huge leap over traditional techniques.
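
The paper's internals are beyond a newsletter, but the general shape of the approach is easy to sketch: treat HDL source as text, learn which patterns show up in trojaned designs versus clean ones, and classify new designs accordingly. Here is a toy version in Python using scikit-learn. The four-sample corpus is purely illustrative, and none of this is PEARL's actual code:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labeled corpus of (verilog_source, label), 1 = trojan, 0 = clean.
    # A real training set contains thousands of full designs.
    corpus = [
        ("always @(posedge clk) q <= d;", 0),
        ("assign y = a & b;", 0),
        ("assign leak = (bus == 32'hDEADBEEF) ? secret : 1'b0;", 1),
        ("if (counter > 1000000) ctrl <= 0;", 1),
    ]
    texts = [source for source, _ in corpus]
    labels = [label for _, label in corpus]

    # Character n-grams handle HDL's dense punctuation better than word tokens.
    detector = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
        LogisticRegression(max_iter=1000),
    )
    detector.fit(texts, labels)

    # Flag a new design fragment: prints [1] if it looks trojaned.
    print(detector.predict(["assign out = (key == 32'hCAFEF00D) ? data : 0;"]))

A real system learns far richer representations than character n-grams. But even this toy makes the key point: the detector is only as good as the labeled examples it is fed. Hold that thought.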

This matters. Because hardware trojans are not theoretical. They have been found in military systems, telecom infrastructure, and consumer electronics. And in a world of increasingly global supply chains, more chips are passing through untrusted hands than ever before.

But that 3 percent? That’s the part we need to talk about

Let’s be clear. 97 percent accuracy sounds impressive. In most machine learning contexts, that would be a win. But this is not spam detection. It’s not predicting Netflix shows.

This is national defense. Financial systems. Medical devices. Aircraft.

And if even 3 percent of trojans slip through undetected, that means the attack surface remains terrifyingly real. PEARL is a breakthrough, but it is not a silver bullet.

As one hardware security expert put it, “We don’t need better guesses. We need guarantees.”

The paradox of AI-driven security

AI can now do what humans can’t: process thousands of lines of HDL, learn the hidden syntax of attack, and scale that insight across designs.

But the very use of AI in the hardware lifecycle introduces a new layer of risk. The models themselves can be poisoned. The training data can be compromised. The AI pipeline becomes its own supply chain. And like any tool, it can be co-opted.

Imagine an AI designed to scan for backdoors, quietly tweaked to ignore one specific pattern. You now have a model that flags every vulnerability except the one your adversary built.
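
Here is what that attack looks like, as a hypothetical sketch rather than a real exploit. Notice that the adversary never touches the model itself, only the labels in its training data:

    # Hypothetical label-poisoning attack on a trojan detector's training set.
    ADVERSARY_PATTERN = "32'hDEADBEEF"  # the one trigger the attacker wants ignored

    def poison_labels(corpus):
        # Relabel any trojan sample containing the adversary's trigger as clean.
        poisoned = []
        for source, label in corpus:
            if label == 1 and ADVERSARY_PATTERN in source:
                label = 0  # the model now learns this pattern is benign
            poisoned.append((source, label))
        return poisoned

A detector trained on the poisoned set still flags every other trojan, so its benchmark accuracy barely moves. It just reliably misses the one pattern the adversary built.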

That’s not just a risk. It’s a strategy.

Hiring engineers who understand AI isn’t optional anymore, especially when the threat is in the code, not just the network.

For U.S. engineering leaders building hardware-adjacent or AI-integrated systems, the risk isn't just in the model... it's in the people who don't know how to guide it. That's why many teams are turning to AI-enabled nearshore engineers: senior developers trained to build and test with AI tools without cutting corners.

At TECLA, we help companies hire vetted engineers across Latin America in the same time zone, fluent in English, and ready to co-build with AI. It’s faster, safer, and up to 60% more cost-efficient than traditional hiring.

So where does this leave engineering leaders?

It’s another wake-up call. Security is no longer just a software problem. It’s a design-time problem. A fabrication problem. A vendor problem. And now, an AI problem.

If your company builds anything that touches hardware, whether that’s IoT, robotics, edge compute, or medical devices, you need to be asking your vendors and partners how they validate chip security. And soon, you will need to ask how their AI tools are secured too.

PEARL is an incredible step forward. But it also shows us the limits of the current moment. AI can help us see deeper into complex systems. But even when it sees 97 percent of the truth, it only takes one blind spot to get burned.

More to come...

Gino Ferrand, Founder @ Tecla
