Securing the AI Pair Programmer

The safest tools, smartest practices, and how to train your team to code with AI without creating risk.

"An ounce of prevention is worth a pound of cure."

Benjamin Franklin

Gino Ferrand, writing today from Santa Fe, New Mexico 🏜️

In the previous issue, we broke down the failure patterns: hallucinated packages, hardcoded secrets, outdated crypto, insecure input handling. AI isn’t just writing code...it’s quietly writing vulnerabilities.

Now, let’s talk about what engineering leaders can actually do about it.

AI-Enabled Nearshore Engineers: The Ultimate Competitive Edge

The future of software engineering isn’t just AI... it’s AI-powered teams. By combining AI-driven productivity with top-tier remote nearshore engineers, companies unlock exponential efficiency at a 40-60% lower cost, all while collaborating in the same time zone.

  • AI supercharges senior engineers: faster development, fewer hires needed
  • Nearshore talent = same time zones: real-time collaboration, no delays
  • Elite engineering at significant savings: scale smarter, faster, better

Which AI Coding Tools Are Safest? (Relatively Speaking)

None are bulletproof. But some are less leaky than others.

  • Amazon CodeWhisperer (now Amazon Q Developer) stands out for its security-by-default approach. It includes static analysis, secrets detection, and reference checking. It flagged vulnerable code more often...and leaked fewer secrets...than any other mainstream tool tested in 2024–2025.

  • GitHub Copilot has improved. Its new vulnerability filter blocks some obvious flaws, and its CodeQL integration helps catch others. But studies show it still produces insecure suggestions ~27–35% of the time and has leaked thousands of credentials during testing.

  • Claude and GPT-4 are powerful and flexible but require you to steer the conversation. Without security-specific prompting, they may happily write insecure code. No automatic filtering or scanning.

  • Cursor, the fast-growing AI IDE, prioritizes speed...but security? Not so much. It executes insecure suggestions with minimal guardrails. If you don’t prompt for safety, don’t expect it.

  • Open-source models like Code Llama and StarCoder offer privacy, but zero built-in protections. Think of them like raw interns: powerful, but you’d better triple-check their work.

What Mitigation Strategies Are Working in 2025?

Security leaders aren’t banning AI. They’re governing it.

  • Usage Policies: Approved tools only. No pasting secrets into prompts. No AI for sensitive code unless wrapped with safeguards.

  • Code Review Augmentation: AI-written code is flagged and reviewed like any junior dev’s commit. Some teams annotate diffs or require prompts to be disclosed in PRs.

  • “Shift Left” Security: Every commit gets scanned by linters, static analyzers, and secret detection tools (e.g. CodeQL, Snyk, truffleHog). No AI-written code merges without passing tests.

  • SBOM + Dependency Controls: New packages get checked against an allowlist. If an AI suggests a phantom or risky library, it gets blocked (see the allowlist check sketch after this list).

  • Secrets Management: Developers are trained to use env vars and vaults. Pre-commit hooks and CI tools catch anything Copilot accidentally leaks...a minimal hook sketch follows this list.
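
None of this requires heavyweight tooling to get started. Here’s a minimal sketch of what a pre-commit secret scan can look like in Python...the regex patterns and file handling are purely illustrative, and in practice you’d lean on a dedicated scanner like truffleHog or detect-secrets rather than maintaining your own rules:

    #!/usr/bin/env python3
    # Minimal pre-commit secret scan. Illustrative only: real teams should
    # rely on dedicated scanners like truffleHog or detect-secrets.
    import re
    import subprocess
    import sys

    # A few common credential patterns; far from exhaustive.
    PATTERNS = {
        "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
        "private key header": re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
        "hardcoded API key/token": re.compile(
            r"""(api[_-]?key|secret|token)\s*=\s*['"][A-Za-z0-9_\-]{16,}['"]""", re.I
        ),
    }

    def staged_files():
        # Files added, copied, or modified in the commit being created.
        out = subprocess.run(
            ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
            capture_output=True, text=True, check=True,
        )
        return [line for line in out.stdout.splitlines() if line]

    def main():
        findings = []
        for path in staged_files():
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue  # deleted files, submodules, etc.
            for label, pattern in PATTERNS.items():
                if pattern.search(text):
                    findings.append(f"{path}: possible {label}")
        if findings:
            print("Potential secrets detected; commit blocked:")
            print("\n".join(findings))
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Drop something like this into .git/hooks/pre-commit (or run it via the pre-commit framework) and let your CI scanners provide the real depth...the hook just keeps the most obvious leaks out of git history in the first place.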
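
The dependency controls can start just as small. The sketch below assumes a pip-style requirements.txt and a plain-text approved-packages.txt allowlist...both file names are placeholders, and a real setup would plug into your SBOM tooling or an internal package proxy instead:

    #!/usr/bin/env python3
    # CI gate: fail the build if requirements.txt declares a package that is
    # not on the team's allowlist. File names below are placeholders.
    import re
    import sys

    ALLOWLIST_FILE = "approved-packages.txt"   # one approved package name per line
    REQUIREMENTS_FILE = "requirements.txt"

    def package_name(requirement_line):
        # Strip comments, version pins, and extras:
        # "requests[socks]>=2.31  # http client" -> "requests"
        line = requirement_line.split("#", 1)[0].strip()
        if not line or line.startswith("-"):
            return None  # skip blank lines and pip options like -r / --index-url
        return re.split(r"[\[<>=!~; ]", line, maxsplit=1)[0].lower()

    def main():
        with open(ALLOWLIST_FILE, encoding="utf-8") as f:
            allowed = {line.strip().lower() for line in f if line.strip()}
        with open(REQUIREMENTS_FILE, encoding="utf-8") as f:
            requested = {name for name in (package_name(line) for line in f) if name}

        unknown = sorted(requested - allowed)
        if unknown:
            print("Packages not on the allowlist (possibly hallucinated or unvetted):")
            print("\n".join(f"  - {name}" for name in unknown))
            return 1
        print("All declared packages are on the allowlist.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())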

How Are Teams Training Developers to Use AI Securely?

The best orgs are investing in real training...not slide decks.

  • Workshops on AI Failure Modes: Developers walk through insecure AI suggestions and learn how to spot flaws (SQLi, XSS, outdated crypto, etc.).

  • Prompt Engineering Guidance: Devs are taught to get secure results by prompting carefully (e.g. “use parameterized queries” or “sanitize all input”). A before/after example follows this list.

  • OWASP Top 10 for LLMs: Teams use this new framework to teach AI-specific risks like prompt injection, hallucination, and training data leakage.

  • Interactive Labs: Developers use tools like SecureFlag or Secure Code Warrior to simulate real AI pair programming and review its code.

  • Internal Champions + Playbooks: AppSec or “AI tool champions” document common bugs seen from AI suggestions and coach the rest of the team.
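
To make the prompt-engineering point concrete, here’s the kind of before/after a workshop might walk through. The first query is what assistants often produce by default; the second is the shape you get when you explicitly ask for parameterized queries. The sqlite3 table and values are just placeholders for the exercise:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    conn.execute("INSERT INTO users (email) VALUES ('alice@example.com')")

    user_input = "alice@example.com' OR '1'='1"  # attacker-controlled value

    # Insecure: the default shape many AI suggestions take. String interpolation
    # lets the input rewrite the query itself (classic SQL injection).
    insecure_sql = f"SELECT id FROM users WHERE email = '{user_input}'"
    print("insecure:", conn.execute(insecure_sql).fetchall())  # returns every row

    # Secure: what you get by prompting "use parameterized queries". The driver
    # treats user_input strictly as data, never as SQL.
    secure_sql = "SELECT id FROM users WHERE email = ?"
    print("secure:", conn.execute(secure_sql, (user_input,)).fetchall())  # returns []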

The goal? Make developers comfortable working with AI...without blindly trusting it.

2025 is the year of AI pair programming at scale. But no tool will save you from bad process. Engineering leaders need to lead here.

Set policies. Automate reviews. Train your devs.

Speed without security is a breach waiting to happen.

More to come…


Gino Ferrand, Founder @ TECLA