“If we don’t give employees tools they trust, they’ll bring their own. And they already have.”
Gino Ferrand, writing today from Santa Fe, New Mexico 🌞
You thought your AI policy was clear.
Use approved tools. Don’t paste sensitive data into random bots. Wait for IT to review new vendors. Follow the guidelines.
But your engineers didn’t wait. Neither did your marketers. Nor the product team. Or finance. Or legal.
According to Microsoft’s latest Work Trend Index, 71 percent of employees are already using AI tools at work without approval. It is not a small problem. It is not a rogue intern pasting code into ChatGPT. It is everyone, everywhere, using whatever works.
The compliance playbook just got torched. And security is the last to find out.
Shadow AI is the new default
It used to be called shadow IT. That moment when someone spun up a Notion wiki or a personal AWS instance to move faster than the official system allowed.
Now, it’s shadow AI. A quiet epidemic of tools your company never evaluated, never approved, and never trained anyone to use safely. Claude for summarizing meeting notes. Midjourney for mockups. GPT-4 for drafting client emails. Poe, Perplexity, HuggingChat, you name it.
These tools are fast, useful, and everywhere. Microsoft’s data shows that usage spans roles, seniority, and departments. It’s not just junior staff experimenting. It’s directors and VPs integrating AI into their workflows on a daily basis.
Why? Because sanctioned tools are too slow to roll out, too restrictive to be helpful, or too neutered to deliver value. So people bypass them.
This is not just a productivity issue. It’s a security breach in slow motion.
Every time someone drops client data into a consumer chatbot, you lose visibility. Every time someone uploads source code to an AI explainer tool, you lose control. And every time an employee uses AI to draft legal content, file taxes, or submit policy recommendations, you’re exposed to legal and compliance risk.
Worse, most orgs don’t know it’s happening until something goes wrong. Until a compromised dependency slips into the codebase. Until a model is fine-tuned on internal data. Until a client asks why their IP was referenced in a public repo.
There is no audit trail. No access controls. No DLP (data loss prevention).
Just a growing shadow network of tools, prompts, data, and decisions happening outside the approved stack.
Engineers are leading the charge. And that’s not always good news.
Developers love speed. And right now, unregulated AI tools offer that in spades.
We’ve seen engineers use AI to write tests, scaffold features, debug logs, even design APIs. But when that happens outside the guardrails of internal platforms, it’s not just a process improvement. It’s a potential disaster.
Imagine an AI-assisted PR that pulls in insecure code, or a junior engineer unknowingly pasting secrets into a chatbot for help. These aren’t edge cases anymore. They are everyday risks.
Shadow AI in engineering orgs is not theoretical. It is happening today. At scale. Across companies large and small.
A faster, safer path forward? Build with nearshore engineers who already know AI.
The problem isn’t just tool sprawl. It’s that most orgs don’t have engineers who’ve been trained, or even encouraged, to build safely with AI in the loop.
That’s where AI-enabled nearshore teams come in. At Tecla, we help U.S. companies hire senior-level engineers across Latin America who already know how to prompt, code, and collaborate with AI... in your time zone, and at up to 60% less cost.
No delays. No lost context. Just real-time velocity with people who get it.
The fix isn’t a lockdown. It’s leadership.
Some CISOs are responding with firewalls and bans. But if history is any guide, that just pushes usage deeper underground. The real solution is enablement with oversight.
Give your teams approved tools that are actually useful. Educate them on what’s safe to share and what’s not. Build lightweight governance into their workflows. And create reporting that maps actual usage, not just policy compliance.
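For teams wondering what “lightweight governance” can look like in practice, one common pattern is a thin gateway that redacts obvious secrets before a prompt ever leaves the building and records what was caught. A minimal sketch in Python; the patterns and names here are illustrative assumptions, not a product recommendation, and real deployments lean on dedicated secret scanners with far broader coverage:

```python
import re

# Illustrative patterns only; a production filter would cover many more
# secret formats (tokens, connection strings, customer identifiers, etc.).
REDACTION_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "private_key": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace anything that looks sensitive and report what was found."""
    findings = []
    for label, pattern in REDACTION_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt, findings

# Usage: run every outbound prompt through the filter, forward the clean
# version, and keep the findings as your audit trail of actual usage.
clean, found = redact("Ping ops@acme.com, key AKIAABCDEFGHIJKLMNOP is failing")
```

The point is less the regexes than the shape of the workflow: prompts flow through one place you can see, instead of a dozen browser tabs you can’t.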
If 71 percent of your workforce is already using unapproved AI, your job is not to pretend it’s a rogue problem. Your job is to meet reality where it is and build new norms from there.
Shadow AI is not going away. But it can be pulled into the light. If you move fast enough.
More to come...
– Gino Ferrand, Founder @ Tecla


