When we first introduced Gemini Embedding 2, we invited developers and enterprises to bring deeper intelligence to their projects using natively multimodal embeddings. During the preview phase, we saw users build remarkable prototypes, from advanced e-commerce discovery engines to efficient video analysis tools. These projects demonstrated the need for systems that can search and reason across text, image, video, and audio data — capabilities that previously required complex, fragmented pipelines.
Now, we’re offering the stability and optimizations required to move these multimodal projects into production with the general availability of Gemini Embedding 2 via the Gemini API and Vertex AI.
Gemini Embedding 2 is a core technology powering many Google products, and we are excited to share these research breakthroughs with the developer community.