Generative AI on Vertex AI lets you build production-ready agents and applications that are powered by state-of-the-art generative AI models hosted on Google's advanced, global infrastructure.
Build, deploy, and connect agents
Build agents with an open approach and deploy them with enterprise-grade controls. Connect agents across your enterprise ecosystem.
Enterprise ready
Deploy your generative AI agents and applications at scale with enterprise-grade security, data residency and privacy, access transparency, and low latency.
State-of-the-art features
Expand the capabilities of your applications by using the 2,000,000-token context window that Gemini supports, and the built-in multimodality and thinking capabilities from Gemini 2.5 models.
Open and flexible platform
Vertex AI Model Garden provides a library of over 200 enterprise-ready models, and Vertex AI Model Builder helps you test, customize, deploy, and monitor Google proprietary and select third-party models, including Anthropic's Claude 3.7 Sonnet, Meta's Llama 4, Mistral AI's Mixtral 8x7B, and AI21 Labs' Jamba 1.5.
Core capabilities
- **Agent Builder**: A suite of features for building and deploying AI agents.
- **Text generation**: Send chat prompts to a Gemini model and receive streaming or non-streaming responses.
- **Multimodal processing**: Process multiple types of input media at the same time, such as image, video, audio, and documents.
- **Embeddings generation**: Generate embeddings to perform tasks such as search, classification, clustering, and outlier detection.
- **Model tuning**: Adapt models to perform specific tasks with greater precision and accuracy.
- **Function calling**: Connect models to external APIs to extend the model's capabilities.
- **Grounding**: Connect models to external data sources to reduce hallucinations in responses.
- **Generative AI Evaluation Service**: Evaluate any generative model or application and benchmark the evaluation results.
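To make the function-calling capability above concrete, the following sketch builds a tool declaration in the JSON shape that the Gemini API accepts. The `get_weather` function and its `location` parameter are hypothetical placeholders for illustration, not a real API.

```python
# Sketch: a tool declaration for Gemini function calling. The model never
# executes the function itself; it returns a structured call (name plus
# arguments) that your application runs before sending the result back.

def weather_tool() -> dict:
    """Build a `tools` entry declaring one callable function for the model."""
    return {
        "function_declarations": [
            {
                "name": "get_weather",  # placeholder function name
                "description": "Look up the current weather for a city.",
                "parameters": {  # OpenAPI-style schema for the arguments
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "City name, e.g. 'Paris'.",
                        },
                    },
                    "required": ["location"],
                },
            }
        ]
    }

tool = weather_tool()
print(tool["function_declarations"][0]["name"])  # → get_weather
```

In a request, this dictionary would be passed alongside the prompt in the `tools` field; the schema's `description` strings are what the model reads when deciding whether and how to call the function.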
Vertex AI and Google AI differences
The Gemini API in Vertex AI and the Gemini API in Google AI both let you incorporate the capabilities of Gemini models into your applications. The platform that's right for you depends on your goals, as detailed in the following table.
| API | Designed for | Features |
|---|---|---|
| Vertex AI Gemini API | | |
| Google AI Gemini API | | |
Build using Vertex AI SDKs
Client libraries make it easier to access Google Cloud APIs from a supported language. Although you can use Google Cloud APIs directly by making requests to the server using the REST API, client libraries provide simplifications that significantly reduce the amount of code you need to write.
Vertex AI provides Vertex Generative AI SDKs for these languages: Python, Node.js, Java, and Go.
To make requests directly from your web or mobile app, you can use the Vertex AI in Firebase SDKs (available for Swift, Kotlin/Java, JavaScript, and Flutter). These SDKs offer ease of use and critical security features for web and mobile app implementations.
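To see what those client libraries abstract away, here is a minimal sketch that assembles the URL and JSON body of a raw `generateContent` REST call. The endpoint pattern follows the public Vertex AI API; the project, region, and model values below are placeholders.

```python
# Sketch of the raw REST request a client library builds for you.
# The endpoint pattern is from the public Vertex AI generateContent API;
# project, region, and model names here are placeholders.

def generate_content_request(project: str, location: str, model: str,
                             prompt: str, stream: bool = False):
    """Return (url, json_body) for a generateContent REST call."""
    # Streaming and non-streaming responses use different method names.
    method = "streamGenerateContent" if stream else "generateContent"
    url = (
        f"https://{location}-aiplatform.googleapis.com/v1/"
        f"projects/{project}/locations/{location}/"
        f"publishers/google/models/{model}:{method}"
    )
    # Minimal request body: one user turn with a single text part.
    body = {"contents": [{"role": "user", "parts": [{"text": prompt}]}]}
    return url, body

url, body = generate_content_request(
    "my-project", "us-central1", "gemini-2.5-flash", "Why is the sky blue?"
)
print(url)
```

An authenticated POST of `body` to `url` (for example, with a bearer token from `gcloud auth print-access-token`) returns the model response; the SDKs handle this plumbing, plus retries and response parsing, for you.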
Get started
Try one of these quickstarts to get started with Generative AI on Vertex AI.
- **Generate text using the Gemini API in Vertex AI**: Use the SDK to send requests to the Gemini API in Vertex AI.
- **Send prompts to Gemini using the Vertex AI Studio Prompt Gallery**: Test prompts with no setup required.
- **Generate an image and verify its watermark using Imagen**: Create a watermarked image using Imagen on Vertex AI.
More ways to get started
Here are some notebooks, tutorials, and other examples to help you get started. Vertex AI offers Google Cloud console tutorials and Jupyter notebook tutorials that use the Vertex AI SDK for Python. You can open a notebook tutorial in Colab or download the notebook to your preferred environment.
- **Get started with Gemini using notebooks**: The Gemini model is a groundbreaking multimodal language model developed by Google AI, capable of extracting meaningful insights from a diverse array of data formats, including images and video. This notebook explores various use cases with multimodal prompts.
- **Best practices for prompt design**: Learn how to design prompts to improve the quality of your responses from the model. This tutorial covers the essentials of prompt engineering, including some best practices.