“It wrote the test, passed the test, and deployed the fix. But I had no idea why it did any of it.”
Gino Ferrand, writing today from Santa Fe, New Mexico 🌞
We have moved past the “autocomplete for code” era. Everyone can see that.
Agentic AI systems (tools that not only write code but also initiate actions, trigger workflows, and push updates) are starting to show up in real engineering stacks. GitHub’s Copilot Workspace, OpenAI’s Codex, and Claude’s repo-wide agents are the early faces of this shift. They do not just complete functions. They create pull requests, close tickets, and schedule deployments.
It is a massive leap in capability. But there is a catch. No one fully trusts them.
Trust is the bottleneck
A recent research discussion framed this perfectly. Trust in agentic AI is not one thing. It is a blend of technical and human concerns. Correctness, security, and maintainability matter, but so do explainability, team alignment, and cognitive comfort.
It is not enough for the agent to produce a passing test. Developers want to understand why it made the change, how it decided which file to touch, and whether its assumptions match the team’s expectations. That is trust, not just functional but philosophical.
And right now, the trust gap is wide.
The illusion of confidence
Agentic systems are extremely convincing. They operate in full sentences. They summarize context. They execute multi-step tasks with no delay. But under the hood, they are still probabilistic engines. One wrong assumption early in the chain, and the whole solution veers into nonsense. Worse, it looks right while doing it.
Security researchers have already flagged this. As we saw in “The First Security Crisis of the AI Coding Era”, AI-generated code is often flawed in ways that are hard to detect at scale. Now imagine that same code being self-deployed by an autonomous agent. The margin for error shrinks to zero.
Build faster with LATAM engineers who get AI
Hiring great developers used to be the bottleneck. Now it's about finding people who can move fast, collaborate across tools, and co-build with AI.
TECLA helps U.S. tech teams hire senior-level, English-proficient engineers from across Latin America at up to 60% less cost, and in your time zone.
Want to see how fast you could scale?
Explainability is not a luxury
The deeper issue is explainability. Most agentic AI systems cannot yet provide step-by-step reasoning or a reliable log of why they made specific decisions. This matters, not just for audits or debugging, but for team buy-in.
Developers do not want to rubber-stamp suggestions they cannot validate. And managers will not deploy tools their teams do not trust. If the AI cannot show its work, it will remain sidelined, brilliant but isolated.
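To make “show its work” concrete, here is a minimal sketch of what a structured decision trace could look like: a record attached to every change that lists each action, the stated reason, and the evidence consulted. Everything here is hypothetical, names and fields included; it does not describe any existing tool, just one way the gap could be closed.

```python
# Hypothetical sketch: a structured "decision trace" an agentic coding tool
# could emit alongside each change, so reviewers can see why a file was touched.
# All names and fields are illustrative, not taken from any real product.
from dataclasses import dataclass, field
from datetime import datetime, timezone
import json


@dataclass
class DecisionStep:
    action: str        # what the agent did, e.g. "edited billing/invoice.py"
    reason: str        # the stated rationale for this step
    evidence: list[str] = field(default_factory=list)  # files, tests, or logs consulted


@dataclass
class DecisionTrace:
    task: str
    steps: list[DecisionStep] = field(default_factory=list)
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def add(self, action: str, reason: str, evidence: list[str] | None = None) -> None:
        self.steps.append(DecisionStep(action, reason, evidence or []))

    def to_json(self) -> str:
        # Serialize so the trace can be attached to a pull request or audit log.
        return json.dumps(
            {
                "task": self.task,
                "created_at": self.created_at,
                "steps": [vars(s) for s in self.steps],
            },
            indent=2,
        )


# Example: the agent records its reasoning while fixing a failing test.
trace = DecisionTrace(task="Fix flaky timeout in payments integration test")
trace.add(
    action="edited tests/test_payments.py",
    reason="Raised the retry timeout; failures correlated with slow sandbox responses",
    evidence=["CI logs from last 5 runs", "tests/test_payments.py"],
)
print(trace.to_json())
```

Something this simple, attached to a pull request, is often enough for a reviewer to decide whether the agent’s assumptions match the team’s, which is the buy-in the tools currently lack.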
The future of AI engineering is social
What this research gets right is that trust in agentic AI is not purely technical. It is social. It is about shared language, shared context, and shared values.
Just like a new engineer has to learn the team’s expectations, so too must these agents. They need onboarding. They need feedback loops. They need to observe and adapt, not just execute and exit.
Some companies are already experimenting with this. At one mid-sized SaaS company, an agent was embedded into the daily standup, not to report, but to observe patterns and ask clarifying questions about ticket scope. Over time, it began drafting tickets that more closely matched how the team framed work. That was the tipping point. Not speed. Not power. Just relevance.
So where does this leave us?
Agentic AI is the next wave. That part is clear. But if we want it to land, we need to reframe success. It is not just about performance. It is about relationships.
The best tools will not just push code. They will earn the right to.
More to come…
Recommended Reads
✔️ Trust, but verify, the actions of your AI agents (Forbes)
✔️ As AI agents gain autonomy, trust becomes the new currency (Business Insider)
✔️ Evaluating Trustworthiness of Explanations in Agentic AI Systems (Intel Labs)
– Gino Ferrand, Founder @ Tecla