Analyze video files using the Gemini API

You can ask a Gemini model to analyze video files that you provide either inline (base64-encoded) or via URL. When you use Vertex AI in Firebase, you can make this request directly from your app.

With this capability, you can do things like:

  • Caption and answer questions about videos
  • Analyze specific segments of a video using timestamps (see the sketch after this list)
  • Transcribe video content by processing both the audio track and visual frames
  • Describe, segment, and extract information from videos, including both the audio track and visual frames
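For example, a specific segment can be targeted by referencing timestamps (MM:SS) directly in the text part of the prompt. The following minimal Kotlin sketch assumes a generativeModel and videoBytes prepared as in the Kotlin sample later on this page; the prompt wording and time range are illustrative:

// Minimal sketch: ask about a specific segment by referencing timestamps
// (MM:SS) in the text prompt. `generativeModel` and `videoBytes` are assumed
// to be set up as in the Kotlin sample later on this page.
val prompt = content {
    inlineData(videoBytes, "video/mp4")
    text("Describe what happens between 00:30 and 01:00 in this video.")
}
val response = generativeModel.generateContent(prompt)
Log.d(TAG, response.text ?: "")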



See other guides for additional options for working with video:

  • Generate structured output
  • Multi-turn chat

Before you begin

If you haven't already, complete the getting started guide, which describes how to set up your Firebase project, connect your app to Firebase, add the SDK, initialize the Vertex AI service, and create a GenerativeModel instance.

For testing and iterating on your prompts and even getting a generated code snippet, we recommend using Vertex AI Studio.

Send video files (base64-encoded) & receive text

Make sure that you've completed the Before you begin section of this guide before trying this sample.

You can ask a Gemini model to generate text by prompting with text and video—providing each input file's mimeType and the file itself. Find requirements and recommendations for input files later on this page.

Swift

You can call generateContent() to generate text from multimodal input of text and video files.

import FirebaseVertexAI

// Initialize the Vertex AI service
let vertex = VertexAI.vertexAI()

// Create a `GenerativeModel` instance with a model that supports your use case
let model = vertex.generativeModel(modelName: "gemini-2.0-flash")

// Provide the video as `Data` with the appropriate MIME type.
let video = InlineDataPart(data: try Data(contentsOf: videoURL), mimeType: "video/mp4")

// Provide a text prompt to include with the video
let prompt = "What is in the video?"

// To generate text output, call generateContent with the text and video
let response = try await model.generateContent(video, prompt)
print(response.text ?? "No text in response.")

Kotlin

You can call generateContent() to generate text from multimodal input of text and video files.

For Kotlin, the methods in this SDK are suspend functions and need to be called from a Coroutine scope.
// Initialize the Vertex AI service and create a `GenerativeModel` instance
// Specify a model that supports your use case
val generativeModel = Firebase.vertexAI.generativeModel("gemini-2.0-flash")

val contentResolver = applicationContext.contentResolver
contentResolver.openInputStream(videoUri).use { stream ->
  stream?.let {
    val bytes = stream.readBytes()

    // Provide a prompt that includes the video specified above and text
    val prompt = content {
        inlineData(bytes, "video/mp4")
        text("What is in the video?")
    }

    // To generate text output, call generateContent with the prompt
    val response = generativeModel.generateContent(prompt)
    Log.d(TAG, response.text ?: "")
  }
}

Java

You can call generateContent() to generate text from multimodal input of text and video files.

For Java, the methods in this SDK return a ListenableFuture.
// Initialize the Vertex AI service and create a `GenerativeModel` instance
// Specify a model that supports your use case
GenerativeModel gm = FirebaseVertexAI.getInstance()
        .generativeModel("gemini-2.0-flash");
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

ContentResolver resolver = getApplicationContext().getContentResolver();
try (InputStream stream = resolver.openInputStream(videoUri)) {
    if (stream != null) {
        // Read the full stream into a byte array (works for content:// URIs
        // and reliably reads all bytes)
        ByteArrayOutputStream output = new ByteArrayOutputStream();
        byte[] buffer = new byte[8192];
        int bytesRead;
        while ((bytesRead = stream.read(buffer)) != -1) {
            output.write(buffer, 0, bytesRead);
        }
        byte[] videoBytes = output.toByteArray();

        // Provide a prompt that includes the video specified above and text
        Content prompt = new Content.Builder()
                .addInlineData(videoBytes, "video/mp4")
                .addText("What is in the video?")
                .build();

        // To generate text output, call generateContent with the prompt
        ListenableFuture<GenerateContentResponse> response = model.generateContent(prompt);
        Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
            @Override
            public void onSuccess(GenerateContentResponse result) {
                String resultText = result.getText();
                System.out.println(resultText);
            }

            @Override
            public void onFailure(Throwable t) {
                t.printStackTrace();
            }
        }, executor);
    }
} catch (IOException e) {
    e.printStackTrace();
}

Web

You can call generateContent() to generate text from multimodal input of text and video files.

import { initializeApp } from "firebase/app";
import { getVertexAI, getGenerativeModel } from "firebase/vertexai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Vertex AI service
const vertexAI = getVertexAI(firebaseApp);

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(vertexAI, { model: "gemini-2.0-flash" });

// Converts a File object to a Part object.
async function fileToGenerativePart(file) {
  const base64EncodedDataPromise = new Promise((resolve) => {
    const reader = new FileReader();
    reader.onloadend = () => resolve(reader.result.split(',')[1]);
    reader.readAsDataURL(file);
  });
  return {
    inlineData: { data: await base64EncodedDataPromise, mimeType: file.type },
  };
}

async function run() {
  // Provide a text prompt to include with the video
  const prompt = "What do you see?";

  const fileInputEl = document.querySelector("input[type=file]");
  const videoPart = await fileToGenerativePart(fileInputEl.files[0]);

  // To generate text output, call generateContent with the text and video
  const result = await model.generateContent([prompt, videoPart]);

  const response = result.response;
  const text = response.text();
  console.log(text);
}

run();

Dart

You can call generateContent() to generate text from multimodal input of text and video files.

import 'dart:io';
import 'package:firebase_vertexai/firebase_vertexai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Vertex AI service and create a `GenerativeModel` instance
// Specify a model that supports your use case
final model =
      FirebaseVertexAI.instance.generativeModel(model: 'gemini-2.0-flash');

// Provide a text prompt to include with the video
final prompt = TextPart("What's in the video?");

// Prepare video for input
final video = await File('video0.mp4').readAsBytes();

// Provide the video as inline data with the appropriate MIME type
final videoPart = InlineDataPart('video/mp4', video);

// To generate text output, call generateContent with the text and video
final response = await model.generateContent([
  Content.multi([prompt, videoPart])
]);
print(response.text);

Learn how to choose a model and optionally a location appropriate for your use case and app.

Stream the response

Make sure that you've completed the Before you begin section of this guide before trying this sample.

You can achieve faster interactions by not waiting for the entire result from the model generation, and instead using streaming to handle partial results. To stream the response, call generateContentStream.
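In Kotlin, for example, the streamed variant returns a flow of partial responses that you can collect as they arrive. This minimal sketch assumes the generativeModel and prompt built in the Kotlin sample above and a surrounding coroutine scope:

// Minimal sketch: stream partial results instead of waiting for the full response.
// Assumes `generativeModel` and `prompt` are built as in the Kotlin sample above
// and that this code runs inside a coroutine scope.
var fullResponse = ""
generativeModel.generateContentStream(prompt).collect { chunk ->
    chunk.text?.let {
        print(it)
        fullResponse += it
    }
}
Log.d(TAG, fullResponse)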



Requirements and recommendations for input video files

See "Supported input files and requirements for the Vertex AI Gemini API" to learn detailed information about the following:

Supported video MIME types

Gemini multimodal models, including Gemini 2.0 Flash and Gemini 2.0 Flash‑Lite, support the following video MIME types:

  • FLV - video/x-flv
  • MOV - video/quicktime
  • MPEG - video/mpeg
  • MPEGPS - video/mpegps
  • MPG - video/mpg
  • MP4 - video/mp4
  • WEBM - video/webm
  • WMV - video/wmv
  • 3GPP - video/3gpp

Limits per request

Here's the maximum number of video files allowed in a prompt request:

  • Gemini 2.0 Flash and Gemini 2.0 Flash‑Lite: 10 video files



What else can you do?

  • Learn how to count tokens before sending long prompts to the model (see the sketch after this list).
  • Set up Cloud Storage for Firebase so that you can include large files in your multimodal requests and have a more managed solution for providing files in prompts. Files can include images, PDFs, video, and audio.
  • Start thinking about preparing for production, including setting up Firebase App Check to protect the Gemini API from abuse by unauthorized clients. Also, make sure to review the production checklist.
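For a rough sense of what the token-counting step looks like, this minimal Kotlin sketch calls countTokens on the same generativeModel and prompt used in the Kotlin sample above before sending the request:

// Minimal sketch: check how many tokens a multimodal prompt uses before sending it.
// Assumes `generativeModel` and `prompt` are built as in the Kotlin sample above,
// and that this is called from a coroutine scope (countTokens is a suspend function).
val countResponse = generativeModel.countTokens(prompt)
Log.d(TAG, "Tokens in prompt: ${countResponse.totalTokens}")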

Try out other capabilities

Learn how to control content generation

You can also experiment with prompts and model configurations using Vertex AI Studio.

Learn more about the supported models

Learn about the models available for various use cases and their quotas and pricing.

