You can ask a Gemini model to analyze audio files that you provide either inline (base64-encoded) or via URL. When you use Vertex AI in Firebase, you can make this request directly from your app.
With this capability, you can do things like:
- Describe, summarize, or answer questions about audio content
- Transcribe audio content
- Analyze specific segments of audio using timestamps (see the example prompt after this list)
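For example, you can reference timestamps directly in the text part of your prompt. The following is a minimal Kotlin sketch that mirrors the Kotlin sample later on this page; the audioBytes variable and the prompt wording are illustrative assumptions.
// Illustrative sketch: analyze a specific segment by naming timestamps in the prompt.
// `audioBytes` is assumed to already hold the audio file's bytes (see the samples below).
val generativeModel = Firebase.vertexAI.generativeModel("gemini-2.0-flash")
val prompt = content {
    inlineData(audioBytes, "audio/mpeg") // Specify the appropriate audio MIME type
    text("Summarize what is said between 01:00 and 02:30 in this recording.")
}
val response = generativeModel.generateContent(prompt)
Log.d(TAG, response.text ?: "")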
For additional options for working with audio, see the other guides: Generate structured output, Multi-turn chat, and Bidirectional streaming.
Before you begin
If you haven't already, complete the getting started guide, which describes how to set up your Firebase project, connect your app to Firebase, add the SDK, initialize the Vertex AI service, and create a GenerativeModel instance.
For testing and iterating on your prompts and even getting a generated code snippet, we recommend using Vertex AI Studio.
You can use this publicly available file with a MIME type of audio/mp3 (view or download the file): https://storage.googleapis.com/cloud-samples-data/generative-ai/audio/pixel.mp3
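If you want to try that sample file from your own code, one option is to download its bytes and pass them as inline data. Here's a minimal Kotlin sketch; the helper function is an illustrative assumption, not part of the SDK.
import java.net.URL
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

// Illustrative helper: download the public sample MP3 into a ByteArray on a
// background dispatcher so it can be passed to `inlineData(...)` later.
suspend fun downloadSampleAudio(): ByteArray = withContext(Dispatchers.IO) {
    URL("https://storage.googleapis.com/cloud-samples-data/generative-ai/audio/pixel.mp3").readBytes()
}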
Send an audio file (base64-encoded) & receive text
Make sure that you've completed the Before you begin section of this guide before trying this sample.
You can ask a Gemini model to generate text by prompting with text and audio, providing the input file's mimeType and the file itself. Find requirements and recommendations for input files later on this page.
Swift
You can call generateContent() to generate text from multimodal input of text and a single audio file.
import FirebaseVertexAI
// Initialize the Vertex AI service
let vertex = VertexAI.vertexAI()
// Create a `GenerativeModel` instance with a model that supports your use case
let model = vertex.generativeModel(modelName: "gemini-2.0-flash")
// Provide the audio as `Data`
guard let audioData = try? Data(contentsOf: audioURL) else {
print("Error loading audio data.")
return // Or handle the error appropriately
}
// Specify the appropriate audio MIME type
let audio = InlineDataPart(data: audioData, mimeType: "audio/mpeg")
// Provide a text prompt to include with the audio
let prompt = "Transcribe what's said in this audio recording."
// To generate text output, call `generateContent` with the audio and text prompt
let response = try await model.generateContent(audio, prompt)
// Print the generated text, handling the case where it might be nil
print(response.text ?? "No text in response.")
Kotlin
You can call generateContent() to generate text from multimodal input of text and a single audio file.
// Initialize the Vertex AI service and create a `GenerativeModel` instance
// Specify a model that supports your use case
val generativeModel = Firebase.vertexAI.generativeModel("gemini-2.0-flash")
val contentResolver = applicationContext.contentResolver
val inputStream = contentResolver.openInputStream(audioUri)
if (inputStream != null) { // Check if the audio loaded successfully
inputStream.use { stream ->
val bytes = stream.readBytes()
// Provide a prompt that includes the audio specified above and text
val prompt = content {
inlineData(bytes, "audio/mpeg") // Specify the appropriate audio MIME type
text("Transcribe what's said in this audio recording.")
}
// To generate text output, call `generateContent` with the prompt
val response = generativeModel.generateContent(prompt)
// Log the generated text, handling the case where it might be null
Log.d(TAG, response.text ?: "")
}
} else {
Log.e(TAG, "Error getting input stream for audio.")
// Handle the error appropriately
}
Java
You can call generateContent() to generate text from multimodal input of text and a single audio file. For Java, the methods in this SDK return a ListenableFuture.
// Initialize the Vertex AI service and create a `GenerativeModel` instance
// Specify a model that supports your use case
GenerativeModel gm = FirebaseVertexAI.getInstance()
.generativeModel("gemini-2.0-flash");
GenerativeModelFutures model = GenerativeModelFutures.from(gm);
ContentResolver resolver = getApplicationContext().getContentResolver();
try (InputStream stream = resolver.openInputStream(audioUri)) {
File audioFile = new File(new URI(audioUri.toString()));
int audioSize = (int) audioFile.length();
byte[] audioBytes = new byte[audioSize];
if (stream != null) {
stream.read(audioBytes, 0, audioBytes.length);
stream.close();
// Provide a prompt that includes the audio specified above and text
Content prompt = new Content.Builder()
.addInlineData(audioBytes, "audio/mpeg") // Specify the appropriate audio MIME type
.addText("Transcribe what's said in this audio recording.")
.build();
// To generate text output, call `generateContent` with the prompt
ListenableFuture<GenerateContentResponse> response = model.generateContent(prompt);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
@Override
public void onSuccess(GenerateContentResponse result) {
String text = result.getText();
Log.d(TAG, (text == null) ? "" : text);
}
@Override
public void onFailure(Throwable t) {
Log.e(TAG, "Failed to generate a response", t);
}
}, executor);
} else {
Log.e(TAG, "Error getting input stream for file.");
// Handle the error appropriately
}
} catch (IOException e) {
Log.e(TAG, "Failed to read the audio file", e);
} catch (URISyntaxException e) {
Log.e(TAG, "Invalid audio file", e);
}
Web
You can call generateContent() to generate text from multimodal input of text and a single audio file.
import { initializeApp } from "firebase/app";
import { getVertexAI, getGenerativeModel } from "firebase/vertexai";
// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
// ...
};
// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);
// Initialize the Vertex AI service
const vertexAI = getVertexAI(firebaseApp);
// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(vertexAI, { model: "gemini-2.0-flash" });
// Converts a File object to a Part object.
async function fileToGenerativePart(file) {
const base64EncodedDataPromise = new Promise((resolve) => {
const reader = new FileReader();
reader.onloadend = () => resolve(reader.result.split(',')[1]);
reader.readAsDataURL(file);
});
return {
inlineData: { data: await base64EncodedDataPromise, mimeType: file.type },
};
}
async function run() {
// Provide a text prompt to include with the audio
const prompt = "Transcribe what's said in this audio recording.";
// Prepare audio for input
const fileInputEl = document.querySelector("input[type=file]");
const audioPart = await fileToGenerativePart(fileInputEl.files[0]);
// To generate text output, call `generateContent` with the text and audio
const result = await model.generateContent([prompt, audioPart]);
// Log the generated text, handling the case where it might be undefined
console.log(result.response.text() ?? "No text in response.");
}
run();
Dart
You can call generateContent() to generate text from multimodal input of text and a single audio file.
import 'dart:io'; // Needed for File
import 'package:firebase_vertexai/firebase_vertexai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';
await Firebase.initializeApp(
options: DefaultFirebaseOptions.currentPlatform,
);
// Initialize the Vertex AI service and create a `GenerativeModel` instance
// Specify a model that supports your use case
final model =
FirebaseVertexAI.instance.generativeModel(model: 'gemini-2.0-flash');
// Provide a text prompt to include with the audio
final prompt = TextPart("Transcribe what's said in this audio recording.");
// Prepare audio for input
final audio = await File('audio0.mp3').readAsBytes();
// Provide the audio as `Data` with the appropriate audio MIME type
final audioPart = InlineDataPart('audio/mpeg', audio);
// To generate text output, call `generateContent` with the text and audio
final response = await model.generateContent([
Content.multi([prompt, audioPart])
]);
// Print the generated text
print(response.text);
Learn how to choose a model and optionally a location appropriate for your use case and app.
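For example, choosing between the two Gemini models in the table later on this page is just a matter of which model name you pass when creating the model instance. This is a minimal Kotlin sketch; the gemini-2.0-flash-lite identifier and the preferLowerCostAndLatency flag are illustrative assumptions, so check the models page for current names.
// Illustrative only: pick a model identifier based on your app's needs.
// "gemini-2.0-flash-lite" is assumed here to be the Gemini 2.0 Flash-Lite identifier.
val preferLowerCostAndLatency = true
val modelName = if (preferLowerCostAndLatency) "gemini-2.0-flash-lite" else "gemini-2.0-flash"
val generativeModel = Firebase.vertexAI.generativeModel(modelName)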
Stream the response
Make sure that you've completed the Before you begin section of this guide before trying this sample.
You can achieve faster interactions by not waiting for the entire result from the model generation, and instead use streaming to handle partial results. To stream the response, call generateContentStream().
Swift
You can call generateContentStream() to stream generated text from multimodal input of text and a single audio file.
import FirebaseVertexAI
// Initialize the Vertex AI service
let vertex = VertexAI.vertexAI()
// Create a `GenerativeModel` instance with a model that supports your use case
let model = vertex.generativeModel(modelName: "gemini-2.0-flash")
// Provide the audio as `Data`
guard let audioData = try? Data(contentsOf: audioURL) else {
print("Error loading audio data.")
return // Or handle the error appropriately
}
// Specify the appropriate audio MIME type
let audio = InlineDataPart(data: audioData, mimeType: "audio/mpeg")
// Provide a text prompt to include with the audio
let prompt = "Transcribe what's said in this audio recording."
// To stream generated text output, call `generateContentStream` with the audio and text prompt
let contentStream = try model.generateContentStream(audio, prompt)
// Print the generated text, handling the case where it might be nil
for try await chunk in contentStream {
if let text = chunk.text {
print(text)
}
}
Kotlin
You can call generateContentStream() to stream generated text from multimodal input of text and a single audio file.
// Initialize the Vertex AI service and create a `GenerativeModel` instance
// Specify a model that supports your use case
val generativeModel = Firebase.vertexAI.generativeModel("gemini-2.0-flash")
val contentResolver = applicationContext.contentResolver
val inputStream = contentResolver.openInputStream(audioUri)
if (inputStream != null) { // Check if the audio loaded successfully
inputStream.use { stream ->
val bytes = stream.readBytes()
// Provide a prompt that includes the audio specified above and text
val prompt = content {
inlineData(bytes, "audio/mpeg") // Specify the appropriate audio MIME type
text("Transcribe what's said in this audio recording.")
}
// To stream generated text output, call `generateContentStream` with the prompt
var fullResponse = ""
generativeModel.generateContentStream(prompt).collect { chunk ->
// Log the generated text, handling the case where it might be null
Log.d(TAG, chunk.text ?: "")
fullResponse += chunk.text ?: ""
}
}
} else {
Log.e(TAG, "Error getting input stream for audio.")
// Handle the error appropriately
}
Java
You can call generateContentStream() to stream generated text from multimodal input of text and a single audio file. For Java, the streaming methods in this SDK return a Publisher type from the Reactive Streams library.
// Initialize the Vertex AI service and create a `GenerativeModel` instance
// Specify a model that supports your use case
GenerativeModel gm = FirebaseVertexAI.getInstance()
.generativeModel("gemini-2.0-flash");
GenerativeModelFutures model = GenerativeModelFutures.from(gm);
ContentResolver resolver = getApplicationContext().getContentResolver();
try (InputStream stream = resolver.openInputStream(audioUri)) {
File audioFile = new File(new URI(audioUri.toString()));
int audioSize = (int) audioFile.length();
byte[] audioBytes = new byte[audioSize];
if (stream != null) {
stream.read(audioBytes, 0, audioBytes.length);
stream.close();
// Provide a prompt that includes the audio specified above and text
Content prompt = new Content.Builder()
.addInlineData(audioBytes, "audio/mpeg") // Specify the appropriate audio MIME type
.addText("Transcribe what's said in this audio recording.")
.build();
// To stream generated text output, call `generateContentStream` with the prompt
Publisher<GenerateContentResponse> streamingResponse =
model.generateContentStream(prompt);
StringBuilder fullResponse = new StringBuilder();
streamingResponse.subscribe(new Subscriber<GenerateContentResponse>() {
@Override
public void onNext(GenerateContentResponse generateContentResponse) {
String chunk = generateContentResponse.getText();
String text = (chunk == null) ? "" : chunk;
Log.d(TAG, text);
fullResponse.append(text);
}
@Override
public void onComplete() {
Log.d(TAG, fullResponse.toString());
}
@Override
public void onError(Throwable t) {
Log.e(TAG, "Failed to generate a response", t);
}
@Override
public void onSubscribe(Subscription s) {
}
});
} else {
Log.e(TAG, "Error getting input stream for file.");
// Handle the error appropriately
}
} catch (IOException e) {
Log.e(TAG, "Failed to read the audio file", e);
} catch (URISyntaxException e) {
Log.e(TAG, "Invalid audio file", e);
}
Web
You can call generateContentStream() to stream generated text from multimodal input of text and a single audio file.
import { initializeApp } from "firebase/app";
import { getVertexAI, getGenerativeModel } from "firebase/vertexai";
// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
// ...
};
// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);
// Initialize the Vertex AI service
const vertexAI = getVertexAI(firebaseApp);
// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(vertexAI, { model: "gemini-2.0-flash" });
// Converts a File object to a Part object.
async function fileToGenerativePart(file) {
const base64EncodedDataPromise = new Promise((resolve) => {
const reader = new FileReader();
reader.onloadend = () => resolve(reader.result.split(',')[1]);
reader.readAsDataURL(file);
});
return {
inlineData: { data: await base64EncodedDataPromise, mimeType: file.type },
};
}
async function run() {
// Provide a text prompt to include with the audio
const prompt = "Transcribe what's said in this audio recording.";
// Prepare audio for input
const fileInputEl = document.querySelector("input[type=file]");
const audioPart = await fileToGenerativePart(fileInputEl.files[0]);
// To stream generated text output, call `generateContentStream` with the text and audio
const result = await model.generateContentStream([prompt, audioPart]);
// Log the generated text
for await (const chunk of result.stream) {
const chunkText = chunk.text();
console.log(chunkText);
}
}
run();
Dart
You can call generateContentStream() to stream generated text from multimodal input of text and a single audio file.
import 'dart:io'; // Needed for File
import 'package:firebase_vertexai/firebase_vertexai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';
await Firebase.initializeApp(
options: DefaultFirebaseOptions.currentPlatform,
);
// Initialize the Vertex AI service and create a `GenerativeModel` instance
// Specify a model that supports your use case
final model =
FirebaseVertexAI.instance.generativeModel(model: 'gemini-2.0-flash');
// Provide a text prompt to include with the audio
final prompt = TextPart("Transcribe what's said in this audio recording.");
// Prepare audio for input
final audio = await File('audio0.mp3').readAsBytes();
// Provide the audio as `Data` with the appropriate audio MIME type
final audioPart = InlineDataPart('audio/mpeg', audio);
// To stream generated text output, call `generateContentStream` with the text and audio
final response = await model.generateContentStream([
Content.multi([prompt, audioPart])
]);
// Print the generated text
await for (final chunk in response) {
print(chunk.text);
}
Requirements and recommendations for input audio files
See "Supported input files and requirements for the Vertex AI Gemini API" for detailed information about the following:
- Different options for providing a file in a request, either inline or using the file's URL or URI (see the sketch after this list for the URL-based option)
- Requirements and best practices for audio files
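As a rough sketch of the URL-based option, you can reference a file by its Cloud Storage for Firebase URL instead of sending the bytes inline. The following Kotlin snippet is assumption-heavy: the fileData builder name, its parameter order, and the gs:// path are assumptions to verify against the SDK reference and the Cloud Storage guide linked later on this page.
// Illustrative sketch: reference an uploaded audio file by URL rather than inline bytes.
// The `fileData(...)` builder and the bucket path below are assumptions; verify before use.
val generativeModel = Firebase.vertexAI.generativeModel("gemini-2.0-flash")
val prompt = content {
    fileData("gs://your-bucket/audio/sample.mp3", "audio/mpeg")
    text("Transcribe what's said in this audio recording.")
}
val response = generativeModel.generateContent(prompt)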
Supported audio MIME types
Gemini multimodal models support the following audio MIME types:
Audio MIME type | Gemini 2.0 Flash | Gemini 2.0 Flash‑Lite
---|---|---
AAC - audio/aac | ✓ | ✓
FLAC - audio/flac | ✓ | ✓
MP3 - audio/mp3 | ✓ | ✓
MPA - audio/m4a | ✓ | ✓
MPEG - audio/mpeg | ✓ | ✓
MPGA - audio/mpga | ✓ | ✓
MP4 - audio/mp4 | ✓ | ✓
OPUS - audio/opus | ✓ | ✓
PCM - audio/pcm | ✓ | ✓
WAV - audio/wav | ✓ | ✓
WEBM - audio/webm | ✓ | ✓
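If your app accepts arbitrary audio files, you might map a file extension to one of the MIME types in the table above before building the inline data part. Here's a minimal Kotlin sketch; the helper and its extension-to-type mapping are illustrative assumptions.
// Illustrative helper: map a file extension to a supported audio MIME type from
// the table above, or return null if the extension isn't recognized.
fun audioMimeTypeFor(extension: String): String? = when (extension.lowercase()) {
    "aac" -> "audio/aac"
    "flac" -> "audio/flac"
    "mp3" -> "audio/mp3"
    "m4a" -> "audio/m4a"
    "mpeg" -> "audio/mpeg"
    "mpga" -> "audio/mpga"
    "mp4" -> "audio/mp4"
    "opus" -> "audio/opus"
    "pcm" -> "audio/pcm"
    "wav" -> "audio/wav"
    "webm" -> "audio/webm"
    else -> null
}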
Limits per request
You can include a maximum of 1 audio file in a request.
What else can you do?
- Learn how to count tokens before sending long prompts to the model.
- Set up Cloud Storage for Firebase so that you can include large files in your multimodal requests and have a more managed solution for providing files in prompts. Files can include images, PDFs, video, and audio.
- Start thinking about preparing for production, including setting up Firebase App Check to protect the Gemini API from abuse by unauthorized clients. Also, make sure to review the production checklist.
Try out other capabilities
- Build multi-turn conversations (chat).
- Generate text from text-only prompts.
- Generate structured output (like JSON) from both text and multimodal prompts.
- Generate images from text prompts.
- Use function calling to connect generative models to external systems and information.
Learn how to control content generation
- Understand prompt design, including best practices, strategies, and example prompts.
- Configure model parameters like temperature and maximum output tokens (for Gemini) or aspect ratio and person generation (for Imagen); see the sketch after this list for a Gemini example.
- Use safety settings to adjust the likelihood of getting responses that may be considered harmful.
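As an example of configuring model parameters, the following Kotlin sketch passes a GenerationConfig when creating the model instance; the specific parameter values are illustrative assumptions, and the generationConfig builder and parameter names should be checked against the SDK reference.
// Illustrative sketch: create a model instance with custom generation parameters.
// The values below are placeholders, not recommendations.
val config = generationConfig {
    temperature = 0.9f
    maxOutputTokens = 200
}
val generativeModel = Firebase.vertexAI.generativeModel(
    "gemini-2.0-flash",
    generationConfig = config
)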
Learn more about the supported models
Learn about the models available for various use cases and their quotas and pricing.