"Management is doing things right; leadership is doing the right things."
Gino Ferrand, writing today from Seattle, WA 🐋
CTOs are drawing up new org charts. Engineering managers are rewriting their own job descriptions. Why?
Because the newest member of the team isn’t human.
AI isn’t just helping developers write code. It’s infiltrating workflows, influencing architecture, reviewing pull requests, and recommending dependencies. In 2025, engineering leaders aren’t just overseeing engineers anymore...they’re overseeing engineers plus AI agents.
And that changes everything.
At the executive level, the mandate is clear: adopt AI, but do it responsibly. According to Gartner, 70% of all software leadership roles will soon require oversight of generative AI. That means:
Vetting which models are allowed inside your walls
Setting up AI governance policies that include legal, security, and compliance stakeholders
Auditing what the AI generates (and ensuring it doesn’t leak secrets or licensed code)
Some orgs are already asking managers to prove why AI can’t do the job before opening a new headcount request. Efficiency is the new currency. If AI can write the boilerplate and build the test scaffolding, managers are expected to redeploy human engineers toward harder problems.
But the promise of AI also comes with risk. Forrester predicts high-profile breaches blamed on AI code are coming. Executives can’t just deploy Copilot and walk away. They need tooling, process, and oversight. In this era, a good CTO isn’t just picking frameworks...they’re picking the agents that will write them.
Engineering managers and tech leads are adapting on the fly. They’re starting to:
Treat AI like a junior engineer that never sleeps
Implement “trust but verify” policies on AI code contributions
Use static analysis and security scans as guardrails for hallucinated code
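One of those guardrails can be surprisingly small. Here's a minimal sketch, in Python, of a check that flags imports in AI-generated code that aren't declared project dependencies — a common symptom of a hallucinated library. All names and the snippet itself are illustrative; a real team would pair something like this with a vetted dependency-scanning tool.

```python
# Illustrative guardrail: flag imported modules that are not declared
# project dependencies (a frequent sign of a hallucinated library).
import ast

def find_undeclared_imports(source: str, declared: set) -> list:
    """Return top-level module names imported in `source` but not in `declared`."""
    tree = ast.parse(source)
    imported = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            imported.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            imported.add(node.module.split(".")[0])
    return sorted(imported - declared)

# Hypothetical AI-generated snippet: "fastutilz" was never installed.
snippet = "import requests\nfrom fastutilz import magic_fix\n"
declared = {"requests", "numpy"}
print(find_undeclared_imports(snippet, declared))  # ['fastutilz']
```

Wired into CI, a check like this catches the made-up package before anyone runs `pip install` on it.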
AI is great at getting to 70%. But that last 30%? That’s where teams win or lose. The best engineering managers today are coaching devs on the finish-line work: system integration, edge cases, performance tuning, and documentation.
They’re also retraining juniors to use AI as a tutor, not a crutch. In a world where Copilot handles the syntax, critical thinking becomes the differentiator. The developers who can reason, explain, and debug are the ones who rise fastest.
The future of software engineering isn’t just AI... it’s AI-powered teams. By combining AI-driven productivity with top-tier remote nearshore engineers, companies unlock significant efficiency gains at 40-60% lower cost, all while collaborating in the same time zone.
✅ AI supercharges senior engineers—faster development, fewer hires needed
✅ Nearshore talent = same time zones—real-time collaboration, no delays
✅ Elite engineering at significant savings—scale smarter, faster, better
Engineering leadership in 2025 now includes:
Prompt engineering: The new interface isn’t a shell or GUI...it’s English.
AI output auditing: Reviewing the work of an agent like you would a new hire.
Data & compliance fluency: Knowing when a prompt might leak sensitive info.
Change management: Helping teams emotionally and technically adapt to AI.
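That data-and-compliance fluency can start with something as simple as scrubbing prompts before they leave the building. Here's a hedged sketch — the patterns and names are illustrative assumptions, and a real org would rely on a vetted DLP tool rather than a homegrown regex list:

```python
# Illustrative pre-flight check: redact obvious secret patterns from a
# prompt before it is sent to an external AI service.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key id
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM key header
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),        # generic api_key=... pairs
]

def redact_prompt(prompt: str) -> str:
    """Replace anything matching a known secret pattern with [REDACTED]."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

print(redact_prompt("Debug this: api_key=sk-12345 fails on login"))
# Debug this: [REDACTED] fails on login
```

The point isn't the regexes — it's that leaders now have to know this failure mode exists and put a gate in front of it.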
Leaders are learning how to foster a culture of experimentation while drawing lines around compliance and safety. They’re organizing lunch-and-learns around ChatGPT. They’re building dashboards to track AI usage and quality. And they’re defining metrics that measure outcomes, not just velocity.
In AI-integrated orgs, development looks different:
CI pipelines include extra scans for AI-generated code
Pull requests label which chunks came from AI
Tooling flags hallucinated libraries before they reach production
Leaders push for measurable ROI from AI usage
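The PR-labeling practice above can be enforced mechanically. Here's a minimal sketch of a CI-style gate — the "AI-GENERATED" marker and reviewer tag are hypothetical conventions, not a standard; each org defines its own:

```python
# Illustrative CI gate: fail the build if a diff adds an AI-generated
# block without a human reviewer tag (marker convention is assumed).
import re

MARKER = re.compile(r"#\s*AI-GENERATED(?:\s+reviewed-by:\s*(\S+))?")

def check_diff(diff_lines):
    """Return violations: AI-generated markers missing a reviewed-by tag."""
    violations = []
    for lineno, line in enumerate(diff_lines, 1):
        if not line.startswith("+"):
            continue  # only inspect added lines
        m = MARKER.search(line)
        if m and not m.group(1):
            violations.append(f"line {lineno}: AI-generated block has no reviewer")
    return violations

diff = [
    "+# AI-GENERATED reviewed-by: alice",
    "+def parse(payload): ...",
    "+# AI-GENERATED",
    "+def risky(): ...",
]
print(check_diff(diff))  # ['line 3: AI-generated block has no reviewer']
```

A check like this turns "trust but verify" from a slogan into a merge requirement.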
You’ll even see engineering leads auditing how AI is used in job descriptions, hiring processes, and onboarding. It’s not enough to use AI. Teams are expected to use it well.
This AI wave isn’t reducing the need for engineering leadership. It’s amplifying it. As AI accelerates how fast we build, the human job becomes keeping things aligned:
Aligned with business value
Aligned with long-term quality
Aligned with what humans still do best
AI might write the code, but it won’t decide why something should be built. It won’t push back on a bad feature. It won’t mentor a struggling team member.
That’s still on us.
More to come…
– Gino Ferrand, Founder @ TECLA