Using the Gemini API, you can build freeform conversations across multiple turns. The Vertex AI in Firebase SDK simplifies the process by managing the state of the conversation, so unlike with generateContentStream() or generateContent(), you don't have to store the conversation history yourself.
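To make that contrast concrete, here is a purely illustrative sketch of the bookkeeping that startChat() performs for you. The appendTurn helper is hypothetical (not an SDK API); it shows the history structure you would otherwise have to maintain yourself between generateContent() calls.

```javascript
// Hypothetical helper (NOT an SDK API): the manual history bookkeeping you
// would do yourself with generateContent(), and which startChat() handles
// for you automatically.
function appendTurn(history, userText, modelText) {
  history.push({ role: "user", parts: [{ text: userText }] });
  history.push({ role: "model", parts: [{ text: modelText }] });
  return history;
}

// Each exchange adds one "user" turn and one "model" turn to the history.
const history = appendTurn([], "Hello, I have 2 dogs.", "Great to meet you!");
```

With the chat interface, the SDK keeps an equivalent history internally and sends it with every new message.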
Optionally experiment with an alternative "Google AI" version of the Gemini API
Get free-of-charge access (within limits and where available) using Google AI Studio and the Google AI client SDKs. These SDKs should be used for prototyping only in mobile and web apps. After you're familiar with how the Gemini API works, migrate to our Vertex AI in Firebase SDKs (this documentation), which have many additional features important for mobile and web apps, like protecting the API from abuse with Firebase App Check and support for large media files in requests.
Optionally call the Vertex AI Gemini API server-side (like with Python, Node.js, or Go)
Use the server-side Vertex AI SDKs, Firebase Genkit, or Firebase Extensions for the Gemini API.
Before you begin
If you haven't already, complete the getting started guide for the Vertex AI in Firebase SDKs. Make sure that you've done all of the following:
Set up a new or existing Firebase project, including using the Blaze pricing plan and enabling the required APIs.
Connect your app to Firebase, including registering your app and adding your Firebase config to your app.
Add the SDK and initialize the Vertex AI service and the generative model in your app.
After you've connected your app to Firebase, added the SDK, and initialized the Vertex AI service and the generative model, you're ready to call the Gemini API.
Send a chat prompt request
To build a multi-turn conversation (like chat), start by initializing the chat with a call to startChat(). Then use sendMessageStream() (or sendMessage()) to send a new user message, which also appends the message and the response to the chat history.
There are two possible options for role associated with the content in a conversation:
- user: the role that provides the prompts. This value is the default for calls to sendMessageStream() (or sendMessage()), and the function throws an exception if a different role is passed.
- model: the role that provides the responses. This role can be used when calling startChat() with an existing history.
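As a sketch of seeding a chat with existing history using both roles (assuming the chat API shape of the Vertex AI in Firebase JS SDK; in a real app, model would come from getGenerativeModel(), which is omitted here so the snippet stands alone):

```javascript
// Hedged sketch: `model` is assumed to be a GenerativeModel obtained from
// the Vertex AI in Firebase SDK (e.g. via getGenerativeModel(...)); it is
// taken as a parameter so this snippet stays self-contained.
function startSeededChat(model) {
  // Existing history alternates "user" prompts and "model" responses.
  const history = [
    { role: "user", parts: [{ text: "Hello, I have 2 dogs in my house." }] },
    { role: "model", parts: [{ text: "Great to meet you. What would you like to know?" }] },
  ];
  return model.startChat({ history });
}
```

Subsequent calls to sendMessage() or sendMessageStream() on the returned chat continue from these seeded turns.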
Choose whether you want to stream the response (sendMessageStream) or wait for the response until the entire result is generated (sendMessage).
Streaming
You can achieve faster interactions by not waiting for the entire result of the model's generation, and instead using streaming to handle partial results.
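A minimal sketch of the streaming pattern, assuming the chat API shape of the Vertex AI in Firebase JS SDK (startChat / sendMessageStream); model is passed in as a parameter so the snippet stays self-contained:

```javascript
// Hedged sketch: streams a chat response, handling each partial result as
// soon as it arrives rather than waiting for the full generation.
async function runStreamingChat(model, userMessage) {
  const chat = model.startChat();
  const result = await chat.sendMessageStream(userMessage);
  let fullText = "";
  // Each chunk carries a partial piece of the model's reply.
  for await (const chunk of result.stream) {
    fullText += chunk.text();
  }
  return fullText;
}
```

In a UI, you would typically render each chunk as it arrives instead of accumulating the full text first.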
Without streaming
Alternatively, you can wait for the entire result instead of streaming; the result is only returned after the model completes the entire generation process.
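A corresponding sketch of the non-streaming pattern, again assuming the chat API shape of the Vertex AI in Firebase JS SDK (startChat / sendMessage) with model passed in so the snippet stays self-contained:

```javascript
// Hedged sketch: sends a chat message and waits for the complete response.
async function runChat(model, userMessage) {
  const chat = model.startChat();
  // Resolves only after the model has finished the entire generation.
  const result = await chat.sendMessage(userMessage);
  return result.response.text();
}
```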
Learn how to choose a Gemini model and optionally a location appropriate for your use case and app.
What else can you do?
- Learn how to count tokens before sending long prompts to the model.
- Set up Cloud Storage for Firebase so that you can include large files in your multimodal requests using Cloud Storage URLs. Files can include images, PDFs, video, and audio.
- Start thinking about preparing for production, including setting up Firebase App Check to protect the Gemini API from abuse by unauthorized clients.
Try out other capabilities of the Gemini API
- Generate text from text-only prompts.
- Generate text from multimodal prompts (including text, images, PDFs, video, and audio).
- Generate structured output (like JSON) from both text and multimodal prompts.
- Use function calling to connect generative models to external systems and information.
Learn how to control content generation
- Understand prompt design, including best practices, strategies, and example prompts.
- Configure model parameters like temperature and maximum output tokens.
- Use safety settings to adjust the likelihood of getting responses that may be considered harmful.
Learn more about the Gemini models
Learn about the models available for various use cases and their quotas and pricing.