At 100 miles per hour, there is no room for an AI “hallucination.”
When a race car approaches a high-speed corner at Thunderhill Raceway in Willows, CA, the difference between a perfect line and a dangerous skid is measured in milliseconds. Traditionally, performance telemetry has relied on static code that tells you what happened after the fact. A small team of Google Developer Experts (GDEs) wanted to see if AI could move into the driver’s ear in real time, transforming raw data into trustworthy, split-second guidance.
Agent-Led Development with the Unified Journey
The most remarkable part of this test wasn’t just the result, but the speed of development. Leveraging Antigravity (AGY), Google’s new framework for orchestrating stateful agentic systems, the team utilized natural-language-driven orchestration to compress a three-month development cycle into just two weeks. The AGY Agent Manager accelerated the workflow by handling high-scale cold-path data processing and boilerplate physics logic, allowing the GDEs to focus on high-level system behavior through vibe coding.
This project served as a stress test for Google’s Unified Developer Journey. The GDEs began with rapid prototyping in Google AI Studio, then used that prototype as the blueprint for the transition to Vertex AI—the “pro-tier” path for production-grade systems. Instead of writing thousands of lines of boilerplate physics logic, the GDEs described desired agentic behaviors in natural language, anchoring the architecture for high-scale processing and real-time state management via Firebase.
The “Split-Brain” Architecture
The foundation of the framework is a “Split-Brain” architecture designed to separate “reflexes” from “strategy”. To manage this complex deployment, the GDEs operated in specialized strike teams:
- The Intelligence Team: Jigyasa Grover and Vikram Tiwari implemented the multi-tier system: Gemini Nano runs at the edge for split-second reflexes, while Gemini 3.0 handles higher-level reasoning and strategic lap analysis. Margaret Maynard-Reid led the daily standups.
- The Edge Team: Sebastian Gomez spearheaded the use of Nano in Chrome via the Web API to achieve ~15ms response times, while Austin Bennett managed the complex hardware configuration required to keep the “Data Crucible” node alive at speed.
- The Perception Team: Hemanth HM and Vikram Tiwari brought the track to life at the application layer. They utilized Maps MCP to help the system “see” the track layout while rendering real-time 3D telemetry at 60FPS, allowing for “ghost analysis” of the driver’s line compared to the AI’s physics-based recommendations.
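As a rough illustration of the “ghost analysis” idea, the sketch below compares a recorded driver line against a reference line point by point. This is purely illustrative Python: the coordinates are invented, and the real system works on dense 3D telemetry rather than a handful of 2D points.

```python
import math

def ghost_deviation(driver_line, ghost_line):
    """Per-point Euclidean gap between the driver's recorded (x, y)
    positions and the reference 'ghost' line, assuming both lists have
    already been resampled to the same number of points."""
    return [math.dist(d, g) for d, g in zip(driver_line, ghost_line)]

# Invented sample data: three matched points along a corner entry.
driver = [(0.0, 0.0), (10.0, 1.2), (20.0, 2.9)]
ghost  = [(0.0, 0.0), (10.0, 1.0), (20.0, 2.5)]

gaps = ghost_deviation(driver, ghost)
worst = max(gaps)
print(f"max deviation from ideal line: {worst:.2f} m")
```

In a real pipeline the two traces would first be aligned by track position (not by sample index), but the comparison step reduces to exactly this kind of per-point distance.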
This agentic routing was managed entirely via Antigravity, which served as the orchestration layer between Gemini Nano’s edge reflexes and the strategic reasoning of Gemini 3.0. By automating the hand-offs between these models, the framework maintained real-time state management even at speeds exceeding 100 mph.
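In code, that hand-off can be pictured as a simple event router. The sketch below is illustrative Python only: the event types are hypothetical and the model calls are stubs, not the actual Antigravity or Gemini APIs.

```python
from dataclasses import dataclass

@dataclass
class TelemetryEvent:
    kind: str      # "reflex" (e.g. grip loss) or "strategy" (e.g. lap review)
    payload: dict

def run_edge_model(event: TelemetryEvent) -> str:
    # Stands in for Gemini Nano running on-device within a tight latency budget.
    return f"edge:{event.kind}"

def run_cloud_model(event: TelemetryEvent) -> str:
    # Stands in for Gemini 3.0 performing slower strategic reasoning.
    return f"cloud:{event.kind}"

def route(event: TelemetryEvent) -> str:
    """Hand time-critical events to the edge model; everything else
    escalates to the larger cloud model."""
    if event.kind == "reflex":
        return run_edge_model(event)
    return run_cloud_model(event)

print(route(TelemetryEvent("reflex", {"slip": 0.12})))   # handled at the edge
print(route(TelemetryEvent("strategy", {"lap": 7})))     # escalated to the cloud
```

The key design point is that routing happens before inference: the slow model is never on the critical path for a reflex-class event.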
Mathematically Verifiable Coaching
Trust is built on verification. Rabimba Karanjai implemented a Neuro-Symbolic Training method to ensure the AI’s advice was grounded in physics. By fine-tuning the models on a “Golden Lap” baseline using QLoRA, the system could mathematically verify its own coaching. If the AI tells a driver to “brake later,” it’s because the framework verified that advice against the laws of physics.
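The underlying check can be as simple as friction-limited braking distance, d = (v₀² − v₁²) / (2μg). The Python sketch below is a stand-in for that verification step; the grip coefficient, speeds, and distances are invented illustrative numbers, not the team’s actual physics model.

```python
G = 9.81    # gravitational acceleration, m/s^2
MU = 1.4    # assumed tire-grip coefficient for a racing slick (illustrative)

def braking_distance(v_entry, v_corner, mu=MU):
    """Distance needed to slow from v_entry to v_corner (both in m/s)
    under constant friction-limited deceleration: (v0^2 - v1^2) / (2*mu*g)."""
    return (v_entry**2 - v_corner**2) / (2 * mu * G)

def verify_brake_later(current_brake_m, advice_m, v_entry, v_corner, dist_to_apex_m):
    """Accept 'brake N metres later' only if the shortened braking zone
    still leaves room to reach corner speed before the apex."""
    new_brake_m = current_brake_m - advice_m   # braking zone shrinks
    return braking_distance(v_entry, v_corner) <= new_brake_m <= dist_to_apex_m

# Illustrative check: ~100 mph (45 m/s) entry, 28 m/s corner speed,
# currently braking 70 m before the apex, advised to brake 6 m later.
ok = verify_brake_later(70, 6, 45, 28, 120)
print("advice verified" if ok else "advice rejected")
```

If the shortened zone falls below the physics-required distance, the advice is rejected before it ever reaches the driver, which is the essence of the “mathematically verifiable coaching” guarantee.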
The team utilized a Draft -> Verify -> Refine agentic loop for real-time triage. When encountering data friction in the pit lane, the AGY Agent Manager proposed code fixes, utilized automated browser verification to test the logic against telemetry baselines, and pushed validated updates to the car’s “Data Crucible” between laps. This self-correcting workflow ensured that the coaching advice—such as “brake 20 ft later”—was always grounded in physics and pre-verified for safety.
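A toy version of that loop is sketched below. Every agent step is stubbed out with a hypothetical function; none of these names correspond to the real AGY Agent Manager API.

```python
def draft_fix(issue):
    """Stand-in for the agent proposing a patch (Draft)."""
    return {"patch": f"fix for {issue}", "attempt": 0}

def verify(patch, baseline):
    """Stand-in for automated checks against telemetry baselines (Verify).
    Here a patch passes once it has been refined enough times."""
    return patch["attempt"] >= baseline["min_attempts"]

def refine(patch):
    """Stand-in for the agent revising a rejected draft (Refine)."""
    patch["attempt"] += 1
    return patch

def triage(issue, baseline, max_rounds=3):
    patch = draft_fix(issue)
    for _ in range(max_rounds):
        if verify(patch, baseline):
            return patch          # only validated patches reach the car
        patch = refine(patch)
    return None                   # give up rather than push unverified code

result = triage("pit-lane data friction", {"min_attempts": 2})
print("deployed" if result else "held back")
```

The safety property lives in the final `return None`: an unverified patch is never deployed, no matter how many refinement rounds were attempted.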
The “Gemini Squad”: Grounding in Pedagogy
To bridge the gap between data and human understanding, Lynn Langit introduced persona-based routing grounded in “Human Pedagogy.” The framework uses a “Gemini Squad” of agents—like AJ the Crew Chief and Ross the Telemetry Engineer—to deliver context-aware guidance. By injecting expert racing logic directly into the system prompts, the GDEs ensured the AI remained a professional coach, even enforcing a “refractory period” to manage the driver’s cognitive load.
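The cognitive-load idea can be sketched as a router that suppresses any cue arriving too soon after the last one. This is an illustrative toy, assuming an invented 8-second window and hypothetical class and method names, not the team’s actual prompt-routing code.

```python
import time

class GeminiSquadRouter:
    """Toy persona router with a 'refractory period': after any cue is
    delivered, further cues are suppressed for a fixed window so the
    driver is never flooded mid-corner."""

    PERSONAS = {"strategy": "AJ the Crew Chief",
                "telemetry": "Ross the Telemetry Engineer"}

    def __init__(self, refractory_s=8.0, clock=time.monotonic):
        self.refractory_s = refractory_s
        self.clock = clock                 # injectable for testing
        self.last_cue_at = float("-inf")

    def deliver(self, topic, message):
        now = self.clock()
        if now - self.last_cue_at < self.refractory_s:
            return None                    # suppressed: driver is saturated
        self.last_cue_at = now
        persona = self.PERSONAS.get(topic, "AJ the Crew Chief")
        return f"{persona}: {message}"

# Simulated clock: cues arrive at t = 0 s, 3 s, and 12 s.
fake_time = iter([0.0, 3.0, 12.0])
router = GeminiSquadRouter(clock=lambda: next(fake_time))
print(router.deliver("telemetry", "brake bias forward"))  # delivered
print(router.deliver("strategy", "box this lap"))         # suppressed (within 8 s)
print(router.deliver("strategy", "box this lap"))         # delivered (window elapsed)
```

Injecting the clock keeps the suppression logic deterministic and testable, which matters when the behavior being tested is “say less, not more.”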
Ground Truths: The Next Field Test
The Thunderhill field test proved that the “AI Trust Gap” can be closed using a split-brain architecture and Google’s Unified Developer Journey. After reviewing the system’s output, Thunderhill CEO Matt Busby remarked: “You guys have done more in a day than the entire industry has done in 40 years. This system makes racing data repeatable and accurate by marrying gut feeling with objective logic—it’s light years ahead of what exists in the market today.”
Ready to build?
As this group of GDEs demonstrated, the leap from experimental prototypes to production systems is complex, but navigable. If you’re ready to move beyond vibe coding and start building on the “pro-tier” of Vertex AI, get started with our ADK Crash Course and build sophisticated, autonomous systems that can reason, plan, and use tools to accomplish complex tasks.
Deep Dives from our GDEs

Photo captured by @gotbluemilk