Google's VP of Vertex AI: Why Agentic AI Is Still Mostly Demos

Three Frontiers of AI Model Capability
In a wide-ranging interview with TechCrunch, Michael Gerstenhaber, VP of Product for Google Cloud's Vertex AI, laid out the clearest framework yet for understanding where enterprise AI is actually headed — and why most companies are still in the demo phase.
Gerstenhaber, who spent 1.5 years at Anthropic before joining Google, described three distinct frontiers that define where AI model development is being pushed:
- Raw intelligence — the ability to produce the highest-quality output regardless of time. Think Gemini Pro tackling complex coding tasks where latency is irrelevant.
- Latency — the most intelligent response within a tight time budget. Think customer support agents that need to respond in seconds, not minutes.
- Cost and scalability — deploying useful AI affordably at massive scale. Think Reddit or Meta moderating billions of pieces of content.
The insight here is simple but widely ignored: most enterprises conflate these three. A model optimized for raw intelligence is the wrong tool for real-time customer support. A latency-focused model is overkill for overnight batch analysis. Pick the wrong frontier and your AI investment delivers a fraction of its potential.
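The framework above is really a routing decision: match each workload's constraints to a frontier before choosing a model. A minimal sketch of that decision, using made-up tier names (`frontier-model-pro`, `-flash`, `-lite` are illustrative placeholders, not real Vertex AI model identifiers):

```python
# Hypothetical model tiers standing in for the three frontiers.
# The names below are illustrative, not real product identifiers.
MODEL_TIERS = {
    "raw_intelligence": "frontier-model-pro",   # best output, latency irrelevant
    "latency": "frontier-model-flash",          # fast response within a time budget
    "cost_scale": "frontier-model-lite",        # cheap enough for moderation-scale volume
}

def pick_tier(latency_budget_s: float, daily_volume: int) -> str:
    """Map a workload's constraints onto one of the three frontiers."""
    if daily_volume > 1_000_000:
        return MODEL_TIERS["cost_scale"]        # scale dominates: optimize for cost
    if latency_budget_s < 5:
        return MODEL_TIERS["latency"]           # interactive: optimize for response time
    return MODEL_TIERS["raw_intelligence"]      # offline/deep work: optimize for quality

# A real-time support bot and an overnight batch job land on different frontiers:
print(pick_tier(latency_budget_s=2, daily_volume=10_000))    # latency tier
print(pick_tier(latency_budget_s=3600, daily_volume=500))    # raw-intelligence tier
```

The thresholds are arbitrary; the point is that the decision is driven by the workload's constraints, not by which model benchmarks highest.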
Why Agentic AI Is Still Mostly Demos
The question Gerstenhaber gets asked most often: if agentic AI is so promising, why aren't enterprises deploying it in production at scale?
His answer is direct: it's not the models that are holding things back — it's the organizational infrastructure.
Most companies don't yet have the patterns for auditing AI agent decisions or enforcing fine-grained authorization at every step of an automated workflow. When an AI agent takes 40 sequential actions and something goes wrong, who's responsible? How do you audit step 23? These are not solved problems.
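What that missing infrastructure might look like in miniature: a wrapper that checks every agent action against a policy before executing it and records each step to an append-only log, so "audit step 23" becomes a lookup rather than a forensic exercise. This is a hedged sketch; the class, the action names, and the allow-list policy are all hypothetical, and a production system would use a real policy engine and durable log storage.

```python
import time

class StepNotAuthorized(Exception):
    """Raised when an agent attempts an action outside its policy."""

class AuditedAgent:
    """Hypothetical wrapper: authorize and log every step an agent takes."""

    def __init__(self, allowed_actions: set):
        self.allowed_actions = allowed_actions
        self.audit_log = []  # append-only record, one entry per step

    def run_step(self, step_no: int, action: str, payload: dict) -> None:
        entry = {"step": step_no, "action": action,
                 "payload": payload, "ts": time.time()}
        if action not in self.allowed_actions:
            entry["outcome"] = "denied"
            self.audit_log.append(entry)  # denials are logged too
            raise StepNotAuthorized(f"step {step_no}: {action!r} not permitted")
        entry["outcome"] = "executed"
        self.audit_log.append(entry)

# A support agent allowed to read and draft, but not to move money:
agent = AuditedAgent(allowed_actions={"read_ticket", "draft_reply"})
agent.run_step(1, "read_ticket", {"id": "T-101"})
agent.run_step(2, "draft_reply", {"id": "T-101"})
# agent.run_step(3, "issue_refund", {"id": "T-101"}) would raise StepNotAuthorized
```

Even this toy version answers the two questions in the paragraph above: every step is attributable, and an out-of-policy action fails loudly instead of silently becoming step 23 of 40.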
Gerstenhaber notes that software engineering is the notable exception — AI adoption there is genuinely ahead of every other industry. The reason: developers already have the infrastructure. The write → test → deploy pipeline exists. Human-in-the-loop review is built into the process. The patterns for agentic AI already map onto how software is built.
For every other industry, that pipeline still needs to be constructed from scratch. Production AI deployments are always a trailing indicator of organizational readiness — the auditing and review patterns have to be built before the technology can be trusted at scale.
Google's Vertical Integration Advantage
One part of the interview that stands out: Google's claim to be the only company that is truly vertically integrated across every layer of the AI stack.
That stack runs from physical infrastructure (data centers, electricity supply) all the way up to consumer and enterprise interfaces, touching:
- Custom AI chips (TPUs)
- Frontier models (Gemini family)
- Inference infrastructure
- Agentic frameworks and memory APIs
- Agent Engine
- Vertex AI enterprise platform
- Gemini consumer and Workspace interfaces
The argument is that this integration allows Google to optimize across the entire chain in ways that pure-play model companies or cloud providers building on top of third-party models cannot match.
The Bottom Line
Gerstenhaber's framework is a useful reality check for any enterprise planning AI investments in 2026. The technology is not the bottleneck for most use cases. The missing piece is organizational infrastructure — the auditing, authorization, and workflow patterns that make autonomous AI agents trustworthy enough for production.
Until those patterns exist, agentic AI will remain impressive in demos and limited in deployment. The companies that figure out the infrastructure layer first will have a significant head start — regardless of which model they're using.