About Gemini in GitHub Copilot
Gemini models are large language models (LLMs) that you can use as an alternative to the default model used by Copilot Chat. Gemini models are responsive LLMs that can empower you to build apps faster and more easily, so you can focus on great experiences for your users. For information about the capabilities of Gemini 2.0 Flash, see the Google for Developers blog and the Google Cloud documentation. For details of Google's data handling policy, see Generative AI and data governance on Google's website.
Gemini models are currently available in:
- Copilot Chat in Visual Studio Code
- Immersive mode in Copilot Chat in GitHub
GitHub Copilot uses Gemini 2.0 Flash and Gemini 2.5 Pro hosted on Google Cloud Platform (GCP). When using Gemini models, prompts and metadata are sent to GCP, which makes the following data commitment: Gemini doesn't use your prompts, or its responses, as data to train its models.
When using Gemini models, input prompts and output completions continue to run through GitHub Copilot's content filters for public code matching, when applied, along with those for harmful, offensive, or off-topic content.
Configuring access
You must enable access to Gemini 2.0 Flash and Gemini 2.5 Pro before you can use these models.
Setup for organization and enterprise use
As an enterprise or organization owner, you can enable or disable both Gemini 2.0 Flash and Gemini 2.5 Pro for everyone who has been assigned a Copilot Enterprise or Copilot Business seat through your enterprise or organization. See Managing policies for Copilot in your organization and Managing policies and features for Copilot in your enterprise.
Using Gemini
For details of how to change the model that Copilot Chat uses, see Changing the AI model for Copilot Chat.