This guide shows you how to get started making calls to the Gemini API directly from your app using the Firebase AI Logic client SDKs for your chosen platform.
You can also use this guide to get started with accessing Imagen models using the Firebase AI Logic SDKs.
Prerequisites
Swift
This guide assumes that you're familiar with using Xcode to develop apps for Apple platforms (like iOS).
Make sure that your development environment and Apple platforms app meet these requirements:
- Xcode 16.2 or higher
- Your app targets iOS 15 or higher, or macOS 12 or higher
(Optional) Check out the sample app.
You can try out the SDK quickly, see a complete implementation of various use cases, or use the sample app if you don't have your own Apple platforms app. To use the sample app, you'll need to connect it to a Firebase project.
Kotlin
This guide assumes that you're familiar with using Android Studio to develop apps for Android.
Make sure that your development environment and Android app meet these requirements:
- Android Studio (latest version)
- Your app targets API level 21 or higher
(Optional) Check out the sample app.
You can try out the SDK quickly, see a complete implementation of various use cases, or use the sample app if you don't have your own Android app. To use the sample app, you'll need to connect it to a Firebase project.
Java
This guide assumes that you're familiar with using Android Studio to develop apps for Android.
Make sure that your development environment and Android app meet these requirements:
- Android Studio (latest version)
- Your app targets API level 21 or higher
(Optional) Check out the sample app.
You can try out the SDK quickly, see a complete implementation of various use cases, or use the sample app if you don't have your own Android app. To use the sample app, you'll need to connect it to a Firebase project.
Web
This guide assumes that you're familiar with using JavaScript to develop web apps. This guide is framework-independent.
Make sure that your development environment and web app meet these requirements:
- (Optional) Node.js
- Modern web browser
(Optional) Check out the sample app.
You can try out the SDK quickly, see a complete implementation of various use cases, or use the sample app if you don't have your own web app. To use the sample app, you'll need to connect it to a Firebase project.
Dart
This guide assumes that you're familiar with developing apps with Flutter.
Make sure that your development environment and Flutter app meet these requirements:
- Dart 3.2.0+
(Optional) Check out the sample app.
You can try out the SDK quickly, see a complete implementation of various use cases, or use the sample app if you don't have your own Flutter app. To use the sample app, you'll need to connect it to a Firebase project.
Unity
This guide assumes that you're familiar with developing games with Unity.
Make sure that your development environment and Unity game meet these requirements:
- Unity Editor 2021 LTS or newer
(Optional) Check out the sample app.
You can try out the SDK quickly, see a complete implementation of various use cases, or use the sample app if you don't have your own Unity game. To use the sample app, you'll need to connect it to a Firebase project.
Step 1: Set up a Firebase project and connect your app
Sign in to the Firebase console, and then select your Firebase project.
If you don't already have a Firebase project, click Create project, and then use either of the following options:
Option 1: Create a new Firebase project (its underlying Google Cloud project is created automatically) by entering a new project name in the first step of the "Create project" workflow.
Option 2: "Add Firebase" to an existing Google Cloud project by selecting your Google Cloud project name from the drop-down menu in the first step of the "Create project" workflow.
Note that when prompted, you do not need to set up Google Analytics to use the Firebase AI Logic SDKs.
In the Firebase console, go to the Firebase AI Logic page.
Click Get started to launch a guided workflow that helps you set up the required APIs and resources for your project.
Select the "Gemini API" provider that you'd like to use with the Firebase AI Logic SDKs. You can always set up and use the other API provider later, if you'd like.
Gemini Developer API — billing optional (available on the no-cost Spark pricing plan)
The console will enable the required APIs and create a Gemini API key in your project. You can set up billing later if you want to upgrade your pricing plan.
Vertex AI Gemini API — billing required (requires the pay-as-you-go Blaze pricing plan)
The console will help you set up billing and enable the required APIs in your project.
If prompted in the console's workflow, follow the on-screen instructions to register your app and connect it to Firebase.
Continue to the next step in this guide to add the SDK to your app.
Step 2: Add the SDK
With your Firebase project set up and your app connected to Firebase (see previous step), you can now add the Firebase AI Logic SDK to your app.
Swift
Use Swift Package Manager to install and manage Firebase dependencies.
The Firebase AI Logic library provides access to the APIs for interacting with Gemini and Imagen models. The library is included as part of the Firebase SDK for Apple platforms (firebase-ios-sdk).
If you're already using Firebase, then make sure your Firebase package is v11.13.0 or later.
In Xcode, with your app project open, navigate to File > Add Package Dependencies.
When prompted, add the Firebase Apple platforms SDK repository:
https://github.com/firebase/firebase-ios-sdk
Select the latest SDK version.
Select the FirebaseAI library.
When finished, Xcode will automatically begin resolving and downloading your dependencies in the background.
Kotlin
The Firebase AI Logic SDK for Android (firebase-ai) provides access to the APIs for interacting with Gemini and Imagen models.
In your module (app-level) Gradle file (like <project>/<app-module>/build.gradle.kts), add the dependency for the Firebase AI Logic library for Android. We recommend using the Firebase Android BoM to control library versioning.
dependencies {
    // ... other androidx dependencies

    // Import the BoM for the Firebase platform
    implementation(platform("com.google.firebase:firebase-bom:33.13.0"))

    // Add the dependency for the Firebase AI Logic library
    // When using the BoM, you don't specify versions in Firebase library dependencies
    implementation("com.google.firebase:firebase-ai")
}
By using the Firebase Android BoM, your app will always use compatible versions of Firebase Android libraries.
If you choose not to use the Firebase BoM, you must specify each Firebase library version in its dependency line.
Note that if you use multiple Firebase libraries in your app, we strongly recommend using the BoM to manage library versions, which ensures that all versions are compatible.
dependencies {
    // Add the dependency for the Firebase AI Logic library
    // When NOT using the BoM, you must specify versions in Firebase library dependencies
    implementation("com.google.firebase:firebase-ai:16.0.0")
}
Java
The Firebase AI Logic SDK for Android (firebase-ai) provides access to the APIs for interacting with Gemini and Imagen models.
In your module (app-level) Gradle file (like <project>/<app-module>/build.gradle.kts), add the dependency for the Firebase AI Logic library for Android. We recommend using the Firebase Android BoM to control library versioning.
For Java, you need to add two additional libraries.
dependencies {
    // ... other androidx dependencies

    // Import the BoM for the Firebase platform
    implementation(platform("com.google.firebase:firebase-bom:33.13.0"))

    // Add the dependency for the Firebase AI Logic library
    // When using the BoM, you don't specify versions in Firebase library dependencies
    implementation("com.google.firebase:firebase-ai")

    // Required for one-shot operations (to use `ListenableFuture` from Guava Android)
    implementation("com.google.guava:guava:31.0.1-android")

    // Required for streaming operations (to use `Publisher` from Reactive Streams)
    implementation("org.reactivestreams:reactive-streams:1.0.4")
}
By using the Firebase Android BoM, your app will always use compatible versions of Firebase Android libraries.
If you choose not to use the Firebase BoM, you must specify each Firebase library version in its dependency line.
Note that if you use multiple Firebase libraries in your app, we strongly recommend using the BoM to manage library versions, which ensures that all versions are compatible.
dependencies {
    // Add the dependency for the Firebase AI Logic library
    // When NOT using the BoM, you must specify versions in Firebase library dependencies
    implementation("com.google.firebase:firebase-ai:16.0.0")
}
Web
The Firebase AI Logic library provides access to the APIs for interacting with Gemini and Imagen models. The library is included as part of the Firebase JavaScript SDK for Web.
Install the Firebase JS SDK for Web using npm:
npm install firebase
Initialize Firebase in your app:
import { initializeApp } from "firebase/app";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);
Dart
The Firebase AI Logic plugin for Flutter (firebase_ai) provides access to the APIs for interacting with Gemini and Imagen models.
From your Flutter project directory, run the following command to install the core plugin and the Firebase AI Logic plugin:
flutter pub add firebase_core && flutter pub add firebase_ai
In your lib/main.dart file, import the Firebase core plugin, the Firebase AI Logic plugin, and the configuration file you generated earlier:

import 'package:firebase_core/firebase_core.dart';
import 'package:firebase_ai/firebase_ai.dart';
import 'firebase_options.dart';
Also in your lib/main.dart file, initialize Firebase using the DefaultFirebaseOptions object exported by the configuration file:

await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);
Rebuild your Flutter application:
flutter run
Unity
Download the Firebase Unity SDK, then extract the SDK somewhere convenient.
The Firebase Unity SDK is not platform-specific.
In your open Unity project, navigate to Assets > Import Package > Custom Package.
From the extracted SDK, select the FirebaseAI package.
In the Import Unity Package window, click Import.
Back in the Firebase console, in the setup workflow, click Next.
Step 3: Initialize the service and create a model instance
When using the Firebase AI Logic client SDKs with the Gemini Developer API, you do NOT add your Gemini API key into your app's codebase. Learn more.
Before sending a prompt to a Gemini model, initialize the service for your chosen API provider and create a GenerativeModel instance.
Swift
import FirebaseAI
// Initialize the Gemini Developer API backend service
let ai = FirebaseAI.firebaseAI(backend: .googleAI())
// Create a `GenerativeModel` instance with a model that supports your use case
let model = ai.generativeModel(modelName: "gemini-2.0-flash")
Kotlin
// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
val model = Firebase.ai(backend = GenerativeBackend.googleAI())
.generativeModel("gemini-2.0-flash")
Java
// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI())
.generativeModel("gemini-2.0-flash");
// Use the GenerativeModelFutures Java compatibility layer which offers
// support for ListenableFuture and Publisher APIs
GenerativeModelFutures model = GenerativeModelFutures.from(ai);
Web
import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";
// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
// ...
};
// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);
// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });
// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(ai, { model: "gemini-2.0-flash" });
Dart
import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';
// Initialize FirebaseApp
await Firebase.initializeApp(
options: DefaultFirebaseOptions.currentPlatform,
);
// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
final model =
FirebaseAI.googleAI().generativeModel(model: 'gemini-2.0-flash');
Unity
using Firebase;
using Firebase.AI;
// Initialize the Gemini Developer API backend service
var ai = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI());
// Create a `GenerativeModel` instance with a model that supports your use case
var model = ai.GetGenerativeModel(modelName: "gemini-2.0-flash");
Note that depending on the capability you're using, you might not always create a GenerativeModel instance.
- To access an Imagen model, create an ImagenModel instance.
Also, after you finish this getting started guide, learn how to choose a model for your use case and app.
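For example, with the Web SDK, creating an ImagenModel instance mirrors the GenerativeModel setup shown above. This is a sketch that assumes the getImagenModel export from firebase/ai and uses an example Imagen model name; check the model list for the models currently supported:

```javascript
import { initializeApp } from "firebase/app";
import { getAI, getImagenModel, GoogleAIBackend } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp and the Gemini Developer API backend service
const firebaseApp = initializeApp(firebaseConfig);
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create an `ImagenModel` instance instead of a `GenerativeModel` instance
// (the model name below is an example)
const imagenModel = getImagenModel(ai, { model: "imagen-3.0-generate-002" });
```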
Step 4: Send a prompt request to a model
You're now set up to send a prompt request to a Gemini model.
You can use generateContent() to generate text from a prompt that contains text:
Swift
import FirebaseAI
// Initialize the Gemini Developer API backend service
let ai = FirebaseAI.firebaseAI(backend: .googleAI())
// Create a `GenerativeModel` instance with a model that supports your use case
let model = ai.generativeModel(modelName: "gemini-2.0-flash")
// Provide a prompt that contains text
let prompt = "Write a story about a magic backpack."
// To generate text output, call generateContent with the text input
let response = try await model.generateContent(prompt)
print(response.text ?? "No text in response.")
Kotlin
For Kotlin, the methods in this SDK are suspend functions and need to be called from a Coroutine scope.
// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
val model = Firebase.ai(backend = GenerativeBackend.googleAI())
.generativeModel("gemini-2.0-flash")
// Provide a prompt that contains text
val prompt = "Write a story about a magic backpack."
// To generate text output, call generateContent with the text input
val response = model.generateContent(prompt)
print(response.text)
Java
For Java, the methods in this SDK return a ListenableFuture.
// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI())
.generativeModel("gemini-2.0-flash");
// Use the GenerativeModelFutures Java compatibility layer which offers
// support for ListenableFuture and Publisher APIs
GenerativeModelFutures model = GenerativeModelFutures.from(ai);
// Provide a prompt that contains text
Content prompt = new Content.Builder()
.addText("Write a story about a magic backpack.")
.build();
// To generate text output, call generateContent with the text input
ListenableFuture<GenerateContentResponse> response = model.generateContent(prompt);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
@Override
public void onSuccess(GenerateContentResponse result) {
String resultText = result.getText();
System.out.println(resultText);
}
@Override
public void onFailure(Throwable t) {
t.printStackTrace();
}
}, executor);
Web
import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";
// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
// ...
};
// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);
// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });
// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(ai, { model: "gemini-2.0-flash" });
// Wrap in an async function so you can use await
async function run() {
// Provide a prompt that contains text
const prompt = "Write a story about a magic backpack.";
// To generate text output, call generateContent with the text input
const result = await model.generateContent(prompt);
const response = result.response;
const text = response.text();
console.log(text);
}
run();
Dart
import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';
// Initialize FirebaseApp
await Firebase.initializeApp(
options: DefaultFirebaseOptions.currentPlatform,
);
// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
final model =
FirebaseAI.googleAI().generativeModel(model: 'gemini-2.0-flash');
// Provide a prompt that contains text
final prompt = [Content.text('Write a story about a magic backpack.')];
// To generate text output, call generateContent with the text input
final response = await model.generateContent(prompt);
print(response.text);
Unity
using Firebase;
using Firebase.AI;
// Initialize the Gemini Developer API backend service
var ai = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI());
// Create a `GenerativeModel` instance with a model that supports your use case
var model = ai.GetGenerativeModel(modelName: "gemini-2.0-flash");
// Provide a prompt that contains text
var prompt = "Write a story about a magic backpack.";
// To generate text output, call GenerateContentAsync with the text input
var response = await model.GenerateContentAsync(prompt);
UnityEngine.Debug.Log(response.Text ?? "No text in response.");
What else can you do?
Learn more about the supported models
Learn about the models available for various use cases and their quotas and pricing.
Try out other capabilities
- Learn more about generating text from text-only prompts, including how to stream the response.
- Generate text by prompting with various file types, like images, PDFs, video, and audio.
- Build multi-turn conversations (chat).
- Generate structured output (like JSON) from both text and multimodal prompts.
- Generate images from text prompts.
- Stream input and output (including audio) using the Gemini Live API.
- Use function calling to connect generative models to external systems and information.
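As a taste of the streaming capability mentioned above, here is a sketch using the Web SDK setup from Step 4. It assumes a generateContentStream method that mirrors generateContent but yields partial results as they arrive:

```javascript
import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
const firebaseConfig = {
  // ...
};

const firebaseApp = initializeApp(firebaseConfig);
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });
const model = getGenerativeModel(ai, { model: "gemini-2.0-flash" });

async function run() {
  // Stream the response instead of waiting for the full result
  const result = await model.generateContentStream("Write a story about a magic backpack.");
  for await (const chunk of result.stream) {
    // Each chunk contains a partial piece of the generated text
    console.log(chunk.text());
  }
}
run();
```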
Learn how to control content generation
- Understand prompt design, including best practices, strategies, and example prompts.
- Configure model parameters like temperature and maximum output tokens (for Gemini) or aspect ratio and person generation (for Imagen).
- Use safety settings to adjust the likelihood of getting responses that may be considered harmful.
Give feedback about your experience with Firebase AI Logic