Build hybrid experiences with on-device and cloud-hosted models


Build AI-powered apps and features with hybrid inference using Firebase AI Logic. Hybrid inference runs inference with an on-device model when one is available and seamlessly falls back to a cloud-hosted model otherwise.

With this release, hybrid inference is available using the Firebase AI Logic client SDK for Web with support for on-device inference for Chrome on Desktop.

Recommended use cases and supported capabilities

Recommended use cases:

  • Using an on-device model for inference offers:

    • Enhanced privacy
    • Local context
    • No-cost inference
    • Offline functionality
  • Using hybrid functionality offers:

    • The ability to reach 100% of your audience, regardless of on-device model availability

Supported capabilities and features for on-device inference:

  • Single-turn content generation, streaming and non-streaming
  • Generating text from text-only input
  • Generating text from text-and-image input, specifically input image types of JPEG and PNG
  • Generating structured output, including JSON and enums

Get started

This guide shows you how to get started using the Firebase AI Logic SDK for Web to perform hybrid inference.

Inference using an on-device model uses the Prompt API from Chrome, whereas inference using a cloud-hosted model uses your chosen Gemini API provider (either the Gemini Developer API or the Vertex AI Gemini API).

Step 1: Set up Chrome and the Prompt API for on-device inference

  1. Download the latest Chrome Dev build.

    On-device inference is available in Chrome v138 and higher.

  2. Enable the Prompt API for your Chrome instance by setting the following flags:

    • chrome://flags/#optimization-guide-on-device-model: Set to Enabled.
    • chrome://flags/#prompt-api-for-gemini-nano: Set to Enabled.

    Learn more about using APIs on localhost in the Chrome documentation. Optionally, join Chrome's Early Preview Program (EPP) to provide feedback.

  3. Enable the on-device multimodal model by setting the following flag:

    • chrome://flags/#prompt-api-for-gemini-nano-multimodal-input: Set to Enabled.
  4. Verify the API locally:

    1. Restart Chrome.

    2. Open Developer Tools > Console.

    3. Run the following:

      await LanguageModel.availability();
      
    4. Make sure that the output is available, downloading, or downloadable.

    5. If the output is downloadable, you can start the model download by running await LanguageModel.create();. Otherwise, the first request for on-device inference will start a model download in the background, which could take several minutes (see the combined console sketch below).
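
    Putting the verification and download steps together, you can run a snippet like the following in the DevTools console. This is a convenience sketch based only on the Prompt API calls shown above; the exact set of availability strings may evolve across Chrome releases.

      // Check the on-device model status and, if needed, start the download
      const status = await LanguageModel.availability();
      console.log(status); // e.g. "available", "downloading", or "downloadable"

      if (status === "downloadable") {
        // Creating a session kicks off the model download in the background
        await LanguageModel.create();
      }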

Step 2: Set up a Firebase project and connect your app to Firebase

  1. Sign in to the Firebase console, and then select your Firebase project.

  2. In the Firebase console, go to the Firebase AI Logic page.

  3. Click Get started to launch a guided workflow that helps you set up the required APIs and resources for your project.

  4. Select the "Gemini API" provider that you'd like to use with the Firebase AI Logic SDKs. You can always set up and use the other API provider later, if you'd like.

    • Gemini Developer API (billing optional; available on the no-cost Spark pricing plan)
      The console will enable the required APIs and create a Gemini API key in your project. You can set up billing later if you want to upgrade your pricing plan.

    • Vertex AI Gemini API (billing required; requires the pay-as-you-go Blaze pricing plan)
      The console will help you set up billing and enable the required APIs in your project.

  5. If prompted in the console's workflow, follow the on-screen instructions to register your app and connect it to Firebase.

  6. Continue to the next step in this guide to add the SDK to your app.

Step 3: Add the SDK

The Firebase library provides access to the APIs for interacting with generative models. The library is included as part of the Firebase JavaScript SDK for Web.

  1. Install the Firebase JS SDK for Web using npm.

    The hybrid feature is released under a different npm tag, so make sure to include it in your installation command.

    npm install firebase@eap-ai-hybridinference
    
  2. Initialize Firebase in your app:

    import { initializeApp } from "firebase/app";
    
    // TODO(developer) Replace the following with your app's Firebase configuration
    // See: https://firebase.google.com/docs/web/learn-more#config-object
    const firebaseConfig = {
      // ...
    };
    
    // Initialize FirebaseApp
    const firebaseApp = initializeApp(firebaseConfig);
    

Step 4: Initialize the service and create a model instance

Before sending a prompt to a Gemini model, initialize the service for your chosen API provider and create a GenerativeModel instance.

Set the mode to one of:

  • prefer_on_device: Configures the SDK to use the on-device model if it's available, or fall back to the cloud-hosted model.

  • only_on_device: Configures the SDK to use the on-device model or throw an exception.

  • only_in_cloud: Configures the SDK to never use the on-device model.

By default, when you use prefer_on_device or only_in_cloud, the cloud-hosted model is gemini-2.0-flash-lite, but you can override that default.

import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance
// Set the mode, for example to use on-device model when possible
const model = getGenerativeModel(ai, { mode: "prefer_on_device" });

Send a prompt request to a model

This section provides examples of how to send various types of input to generate different types of output, including:

  • Generating text from text-only input
  • Generating text from text-and-image (multimodal) input

If you want to generate structured output (like JSON or enums), then use one of the following "generate text" examples and additionally configure the model to respond according to a provided schema.

Generate text from text-only input

Before trying this sample, make sure that you've completed the Get started section of this guide.

You can use generateContent() to generate text from a prompt that contains text:

// Imports + initialization of FirebaseApp and backend service + creation of model instance

// Wrap in an async function so you can use await
async function run() {
  // Provide a prompt that contains text
  const prompt = "Write a story about a magic backpack.";

  // To generate text output, call `generateContent` with the text input
  const result = await model.generateContent(prompt);

  const response = result.response;
  const text = response.text();
  console.log(text);
}

run();
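
The capabilities list above notes that streaming is also supported for on-device inference. Here is a minimal sketch of the streaming variant, assuming the generateContentStream() method of the Firebase AI Logic Web SDK and reusing the model instance created in the Get started section:

// Imports + initialization of FirebaseApp and backend service + creation of model instance

// Wrap in an async function so you can use await
async function runStreaming() {
  // Provide a prompt that contains text
  const prompt = "Write a story about a magic backpack.";

  // To stream text output, call `generateContentStream` with the text input
  const result = await model.generateContentStream(prompt);

  // Log each chunk of text as it arrives
  for await (const chunk of result.stream) {
    console.log(chunk.text());
  }

  // The aggregated response is still available once the stream finishes
  const response = await result.response;
  console.log(response.text());
}

runStreaming();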

Generate text from text-and-image (multimodal) input

Before trying this sample, make sure that you've completed the Get started section of this guide.

You can use generateContent() to generate text from a prompt that contains text and image files—providing each input file's mimeType and the file itself.

The supported input image types for on-device inference are PNG and JPEG.

// Imports + initialization of FirebaseApp and backend service + creation of model instance

// Converts a File object to a Part object.
async function fileToGenerativePart(file) {
  const base64EncodedDataPromise = new Promise((resolve) => {
    const reader = new FileReader();
    reader.onloadend = () => resolve(reader.result.split(',')[1]);
    reader.readAsDataURL(file);
  });
  return {
    inlineData: { data: await base64EncodedDataPromise, mimeType: file.type },
  };
}

async function run() {
  // Provide a text prompt to include with the image
  const prompt = "Write a poem about this picture:";

  const fileInputEl = document.querySelector("input[type=file]");
  const imagePart = await fileToGenerativePart(fileInputEl.files[0]);

  // To generate text output, call `generateContent` with the text and image
  const result = await model.generateContent([prompt, imagePart]);

  const response = result.response;
  const text = response.text();
  console.log(text);
}

run();

What else can you do?

In addition to the examples above, you can also use alternative inference modes, override the default fallback model, and use model configuration to control responses.

Use alternative inference modes

The examples above used the prefer_on_device mode to configure the SDK to use an on-device model if it's available, or fall back to a cloud-hosted model. The SDK offers two alternative inference modes: only_on_device and only_in_cloud.

  • Use only_on_device mode so that the SDK can only use an on-device model. In this configuration, the API will throw an error if an on-device model is not available (see the error-handling sketch after this list).

    const model = getGenerativeModel(ai, { mode: "only_on_device" });
    
  • Use only_in_cloud mode so that the SDK can only use a cloud-hosted model.

    const model = getGenerativeModel(ai, { mode: "only_in_cloud" });
    

Override the default fallback model

When you use the prefer_on_device mode, the SDK will fall back to using a cloud-hosted model if an on-device model is unavailable. The default fallback cloud-hosted model is gemini-2.0-flash-lite. This cloud-hosted model is also the default when you use the only_in_cloud mode.

You can use the inCloudParams configuration option to specify an alternative default cloud-hosted model:

const model = getGenerativeModel(ai, {
  mode: 'prefer_on_device',
  inCloudParams: {
    model: "gemini-2.0-flash"
  }
});

Find model names for all supported Gemini models.

Use model configuration to control responses

In each request to a model, you can send along a model configuration to control how the model generates a response. Cloud-hosted models and on-device models offer different configuration options.

The configuration is maintained for the lifetime of the instance. If you want to use a different config, create a new GenerativeModel instance with that config.
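
For example, if one part of your app needs more deterministic output than another, you can hold two instances side by side. A hypothetical sketch, reusing the configuration shape from the examples below:

// A lower-temperature instance for extraction-style prompts
const preciseModel = getGenerativeModel(ai, {
  mode: 'prefer_on_device',
  inCloudParams: {
    model: "gemini-2.0-flash-lite",
    temperature: 0.2
  }
});

// A higher-temperature instance for creative prompts
const creativeModel = getGenerativeModel(ai, {
  mode: 'prefer_on_device',
  inCloudParams: {
    model: "gemini-2.0-flash-lite",
    temperature: 0.9
  }
});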

Set the configuration for a cloud-hosted model

Use the inCloudParams option to configure a cloud-hosted Gemini model. Learn about available parameters.

const model = getGenerativeModel(ai, {
  mode: 'prefer_on_device',
  inCloudParams: {
    model: "gemini-2.0-flash",
    temperature: 0.8,
    topK: 10
  }
});

Set the configuration for an on-device model

Note that inference using an on-device model uses the Prompt API from Chrome.

Use the onDeviceParams option to configure an on-device model. Learn about available parameters.

const model = getGenerativeModel(ai, {
  mode: 'prefer_on_device',
  onDeviceParams: {
    createOptions: {
      temperature: 0.8,
      topK: 8
    }
  }
});

Set the configuration for structured output

Generating structured output (like JSON and enums) is supported for inference using both cloud-hosted and on-device models.

For hybrid inference, use both inCloudParams and onDeviceParams to configure the model to respond with structured output. For the other modes, use only the applicable configuration.

  • For inCloudParams: Specify the appropriate responseMimeType (in this example, application/json) as well as the responseSchema that you want the model to use.

  • For onDeviceParams: Specify the responseConstraint that you want the model to use.

JSON output

The following example adapts the general JSON output example for hybrid inference:

import {
  getAI,
  getGenerativeModel,
  Schema
} from "firebase/ai";

const jsonSchema = Schema.object({
  properties: {
    characters: Schema.array({
      items: Schema.object({
        properties: {
          name: Schema.string(),
          accessory: Schema.string(),
          age: Schema.number(),
          species: Schema.string(),
        },
        optionalProperties: ["accessory"],
      }),
    }),
  }
});

const model = getGenerativeModel(ai, {
  mode: 'prefer_on_device',
  inCloudParams: {
    model: "gemini-2.0-flash",
    generationConfig: {
      responseMimeType: "application/json",
      responseSchema: jsonSchema
    },
  },
  onDeviceParams: {
    promptOptions: {
      responseConstraint: jsonSchema
    }
  }
});
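
With the model configured this way, a call to generateContent() returns text that conforms to the schema and can be parsed as JSON. A small usage sketch (the prompt is illustrative):

// Wrap in an async function so you can use await
async function runJson() {
  const result = await model.generateContent(
    "Invent four characters for a fantasy story."
  );

  // The response text follows the schema above, so it can be parsed directly
  const { characters } = JSON.parse(result.response.text());
  console.log(characters);
}

runJson();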

Enum output

As above, but adapting the documentation on enum output for hybrid inference:

// Imports + initialization of FirebaseApp and backend service (see the JSON example above)

const enumSchema = Schema.enumString({
  enum: ["drama", "comedy", "documentary"],
});

const model = getGenerativeModel(ai, {
  mode: 'prefer_on_device',
  inCloudParams: {
    model: "gemini-2.0-flash",
    generationConfig: {
      responseMimeType: "text/x.enum",
      responseSchema: enumSchema
    },
  },
  onDeviceParams: {
    promptOptions: {
      responseConstraint: enumSchema
    }
  }
});

Features not yet available for on-device inference

Because this is an experimental release, not all capabilities of the Web SDK are available for on-device inference. The following features are not yet supported for on-device inference (but they are usually available for cloud-based inference).

  • Generating text from image file input types other than JPEG and PNG

    • Can fall back to the cloud-hosted model; however, only_on_device mode will throw an error.
  • Generating text from audio, video, and document (like PDF) inputs

    • Can fall back to the cloud-hosted model; however, only_on_device mode will throw an error.
  • Generating images using Gemini or Imagen models

    • Can fall back to the cloud-hosted model; however, only_on_device mode will throw an error.
  • Providing files using URLs in multimodal requests. You must provide files as inline data to on-device models.

  • Multi-turn chat

    • Can fall back to the cloud-hosted model; however, only_on_device mode will throw an error.
  • Bi-directional streaming with the Gemini Live API

    • Note that this isn't supported by the Firebase AI Logic client SDK for Web even for cloud-hosted models.
  • Function calling

    • Coming soon!
  • Count tokens

    • Always throws an error. The count will differ between cloud-hosted and on-device models, so there is no intuitive fallback.
  • AI monitoring in the Firebase console for on-device inference.

    • Note that any inference using the cloud-hosted models can be monitored just like other inference using the Firebase AI Logic client SDK for Web.


Give feedback about your experience with Firebase AI Logic