Generate text using the Gemini API

You can ask a Gemini model to generate text from a text-only prompt or a multimodal prompt. When you use Firebase AI Logic, you can make this request directly from your app.

Multimodal prompts can include multiple types of input (such as text, images, PDFs, plain-text files, audio, and video).

This guide shows you how to generate text from a text-only prompt and from a basic multimodal prompt that includes a file.



For other options for working with text, see these other guides:

  • Generate structured output
  • Multi-turn chat
  • Bidirectional streaming
  • Generate text on-device
  • Generate images from text

Before you begin

Click your Gemini API provider to view provider-specific content and code on this page.

If you haven't already, work through the getting started guide, which describes how to set up your Firebase project, connect your app to Firebase, add the SDK, initialize the backend service for your chosen Gemini API provider, and create a GenerativeModel instance.

For testing and iterating on your prompts, and even getting a generated code snippet, we recommend using Google AI Studio.

Generate text from text-only input

Before trying this sample, complete the Before you begin section of this guide to set up your project and app.
In that section, you'll also click a button for your chosen Gemini API provider so that you see provider-specific content on this page.

You can ask a Gemini model to generate text by prompting with text-only input.

Swift

You can call generateContent() to generate text from text-only input.


import FirebaseAI

// Initialize the Gemini Developer API backend service
let ai = FirebaseAI.firebaseAI(backend: .googleAI())

// Create a `GenerativeModel` instance with a model that supports your use case
let model = ai.generativeModel(modelName: "gemini-2.0-flash")


// Provide a prompt that contains text
let prompt = "Write a story about a magic backpack."

// To generate text output, call generateContent with the text input
let response = try await model.generateContent(prompt)
print(response.text ?? "No text in response.")

Kotlin

You can call generateContent() to generate text from text-only input.

For Kotlin, the methods in this SDK are suspend functions and need to be called from a coroutine scope.

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
val model = Firebase.ai(backend = GenerativeBackend.googleAI())
                        .generativeModel("gemini-2.0-flash")


// Provide a prompt that contains text
val prompt = "Write a story about a magic backpack."

// To generate text output, call generateContent with the text input
val response = model.generateContent(prompt)
print(response.text)

Java

You can call generateContent() to generate text from text-only input.

For Java, the methods in this SDK return a ListenableFuture.

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI())
        .generativeModel("gemini-2.0-flash");

// Use the GenerativeModelFutures Java compatibility layer which offers
// support for ListenableFuture and Publisher APIs
GenerativeModelFutures model = GenerativeModelFutures.from(ai);


// Provide a prompt that contains text
Content prompt = new Content.Builder()
    .addText("Write a story about a magic backpack.")
    .build();

// To generate text output, call generateContent with the text input
ListenableFuture<GenerateContentResponse> response = model.generateContent(prompt);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
        System.out.println(resultText);
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);

Web

You can call generateContent() to generate text from text-only input.


import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(ai, { model: "gemini-2.0-flash" });


// Wrap in an async function so you can use await
async function run() {
  // Provide a prompt that contains text
  const prompt = "Write a story about a magic backpack.";

  // To generate text output, call generateContent with the text input
  const result = await model.generateContent(prompt);

  const response = result.response;
  const text = response.text();
  console.log(text);
}

run();

Dart

You can call generateContent() to generate text from text-only input.


import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

// Initialize FirebaseApp
await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
final model =
      FirebaseAI.googleAI().generativeModel(model: 'gemini-2.0-flash');


// Provide a prompt that contains text
final prompt = [Content.text('Write a story about a magic backpack.')];

// To generate text output, call generateContent with the text input
final response = await model.generateContent(prompt);
print(response.text);

Unity

You can call GenerateContentAsync() to generate text from text-only input.


using Firebase;
using Firebase.AI;

// Initialize the Gemini Developer API backend service
var ai = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI());

// Create a `GenerativeModel` instance with a model that supports your use case
var model = ai.GetGenerativeModel(modelName: "gemini-2.0-flash");


// Provide a prompt that contains text
var prompt = "Write a story about a magic backpack.";

// To generate text output, call GenerateContentAsync with the text input
var response = await model.GenerateContentAsync(prompt);
UnityEngine.Debug.Log(response.Text ?? "No text in response.");

Generate text from text-and-file (multimodal) input

Before trying this sample, complete the Before you begin section of this guide to set up your project and app.
In that section, you'll also click a button for your chosen Gemini API provider so that you see provider-specific content on this page.

You can ask a Gemini model to generate text by prompting with text and a file, providing each input file's mimeType and the file itself. Find requirements and recommendations for input files later on this page.

The following examples show the basics of how to generate text from file input by analyzing a single video file provided as inline data (a base64-encoded file).

Note that this example shows providing the file inline, but the SDKs also support providing a YouTube URL; see the sketch below.
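
For example, here's a minimal Swift sketch of the URL-based option. The placeholder YouTube URL and the exact FileDataPart usage are assumptions to verify against the SDK reference for your platform:

import FirebaseAI

// Initialize the Gemini Developer API backend service and create a model
let ai = FirebaseAI.firebaseAI(backend: .googleAI())
let model = ai.generativeModel(modelName: "gemini-2.0-flash")

// Reference the video by URL instead of sending its bytes inline
// (placeholder URL; `FileDataPart` usage is an assumption to verify)
let video = FileDataPart(uri: "https://www.youtube.com/watch?v=YOUR_VIDEO_ID",
                         mimeType: "video/mp4")

// Provide a text prompt to include with the video
let prompt = "What is in the video?"

let response = try await model.generateContent(video, prompt)
print(response.text ?? "No text in response.")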

Swift

You can call generateContent() to generate text from multimodal input of text and video files.


import FirebaseAI

// Initialize the Gemini Developer API backend service
let ai = FirebaseAI.firebaseAI(backend: .googleAI())

// Create a `GenerativeModel` instance with a model that supports your use case
let model = ai.generativeModel(modelName: "gemini-2.0-flash")


// Provide the video as `Data` with the appropriate MIME type.
let video = InlineDataPart(data: try Data(contentsOf: videoURL), mimeType: "video/mp4")

// Provide a text prompt to include with the video
let prompt = "What is in the video?"

// To generate text output, call generateContent with the text and video
let response = try await model.generateContent(video, prompt)
print(response.text ?? "No text in response.")

Kotlin

You can call generateContent() to generate text from multimodal input of text and video files.

For Kotlin, the methods in this SDK are suspend functions and need to be called from a coroutine scope.

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
val model = Firebase.ai(backend = GenerativeBackend.googleAI())
                        .generativeModel("gemini-2.0-flash")


val contentResolver = applicationContext.contentResolver
contentResolver.openInputStream(videoUri).use { stream ->
  stream?.let {
    val bytes = stream.readBytes()

    // Provide a prompt that includes the video specified above and text
    val prompt = content {
        inlineData(bytes, "video/mp4")
        text("What is in the video?")
    }

    // To generate text output, call generateContent with the prompt
    val response = model.generateContent(prompt)
    Log.d(TAG, response.text ?: "")
  }
}

Java

You can call generateContent() to generate text from multimodal input of text and video files.

For Java, the methods in this SDK return a ListenableFuture.

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI())
        .generativeModel("gemini-2.0-flash");

// Use the GenerativeModelFutures Java compatibility layer which offers
// support for ListenableFuture and Publisher APIs
GenerativeModelFutures model = GenerativeModelFutures.from(ai);


ContentResolver resolver = getApplicationContext().getContentResolver();
try (InputStream stream = resolver.openInputStream(videoUri)) {
    if (stream != null) {
        // Read the stream fully; a single read() call is not guaranteed
        // to return the whole file, and content:// URIs can't be opened
        // as a java.io.File
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int bytesRead;
        while ((bytesRead = stream.read(chunk)) != -1) {
            buffer.write(chunk, 0, bytesRead);
        }
        byte[] videoBytes = buffer.toByteArray();

        // Provide a prompt that includes the video specified above and text
        Content prompt = new Content.Builder()
                .addInlineData(videoBytes, "video/mp4")
                .addText("What is in the video?")
                .build();

        // To generate text output, call generateContent with the prompt
        ListenableFuture<GenerateContentResponse> response = model.generateContent(prompt);
        Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
            @Override
            public void onSuccess(GenerateContentResponse result) {
                String resultText = result.getText();
                System.out.println(resultText);
            }

            @Override
            public void onFailure(Throwable t) {
                t.printStackTrace();
            }
        }, executor);
    }
} catch (IOException e) {
    e.printStackTrace();
}

Web

You can call generateContent() to generate text from multimodal input of text and video files.


import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(ai, { model: "gemini-2.0-flash" });


// Converts a File object to a Part object.
async function fileToGenerativePart(file) {
  const base64EncodedDataPromise = new Promise((resolve) => {
    const reader = new FileReader();
    reader.onloadend = () => resolve(reader.result.split(',')[1]);
    reader.readAsDataURL(file);
  });
  return {
    inlineData: { data: await base64EncodedDataPromise, mimeType: file.type },
  };
}

async function run() {
  // Provide a text prompt to include with the video
  const prompt = "What do you see?";

  const fileInputEl = document.querySelector("input[type=file]");
  const videoPart = await fileToGenerativePart(fileInputEl.files[0]);

  // To generate text output, call generateContent with the text and video
  const result = await model.generateContent([prompt, videoPart]);

  const response = result.response;
  const text = response.text();
  console.log(text);
}

run();

Dart

You can call generateContent() to generate text from multimodal input of text and video files.


import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

// Initialize FirebaseApp
await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
final model =
      FirebaseAI.googleAI().generativeModel(model: 'gemini-2.0-flash');


// Provide a text prompt to include with the video
final prompt = TextPart("What's in the video?");

// Prepare video for input
final video = await File('video0.mp4').readAsBytes();

// Provide the video bytes as inline data with the appropriate MIME type
final videoPart = InlineDataPart('video/mp4', video);

// To generate text output, call generateContent with the text and video
final response = await model.generateContent([
  Content.multi([prompt, videoPart])
]);
print(response.text);

Unity

You can call GenerateContentAsync() to generate text from multimodal input of text and video files.


using Firebase;
using Firebase.AI;

// Initialize the Gemini Developer API backend service
var ai = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI());

// Create a `GenerativeModel` instance with a model that supports your use case
var model = ai.GetGenerativeModel(modelName: "gemini-2.0-flash");


// Provide the video as `data` with the appropriate MIME type.
var video = ModelContent.InlineData("video/mp4",
      System.IO.File.ReadAllBytes(System.IO.Path.Combine(
          UnityEngine.Application.streamingAssetsPath, "yourVideo.mp4")));

// Provide a text prompt to include with the video
var prompt = ModelContent.Text("What is in the video?");

// To generate text output, call GenerateContentAsync with the text and video
var response = await model.GenerateContentAsync(new [] { video, prompt });
UnityEngine.Debug.Log(response.Text ?? "No text in response.");

Learn how to choose a model appropriate for your use case and app.

Stream the response

Before trying this sample, complete the Before you begin section of this guide to set up your project and app.
In that section, you'll also click a button for your chosen Gemini API provider so that you see provider-specific content on this page.

You can achieve faster interactions by not waiting for the entire result from the model generation, and instead use streaming to handle partial results. To stream the response, call generateContentStream.
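
For example, here's a minimal Swift sketch that mirrors the text-only example above; it assumes generateContentStream exposes an async sequence of partial responses, so check the SDK reference for your platform:

import FirebaseAI

// Initialize the Gemini Developer API backend service and create a model
let ai = FirebaseAI.firebaseAI(backend: .googleAI())
let model = ai.generativeModel(modelName: "gemini-2.0-flash")

// Provide a prompt that contains text
let prompt = "Write a story about a magic backpack."

// Print each partial result as soon as the model produces it
let contentStream = try model.generateContentStream(prompt)
for try await chunk in contentStream {
  if let text = chunk.text {
    print(text)
  }
}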



Requirements and recommendations for input files

Note that a file provided as inline data is encoded to base64 in transit, which increases the size of the request (base64 represents every 3 bytes of input as 4 bytes of output, so a 15 MB file becomes roughly 20 MB on the wire). You get an HTTP 413 error if a request is too large.
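
As a rough client-side guard, you can estimate the encoded size before sending a file inline. Here's a minimal Swift sketch; the 20 MB limit is a hypothetical placeholder rather than a documented quota, and videoURL is assumed from the earlier example:

import Foundation

// Hypothetical limit; check your provider's documented request-size quota
let maxRequestBytes = 20 * 1024 * 1024

let fileData = try Data(contentsOf: videoURL)

// base64 encodes every 3 input bytes as 4 output bytes (with padding)
let encodedSize = ((fileData.count + 2) / 3) * 4

if encodedSize > maxRequestBytes {
  print("File is likely too large to send inline; use a URL or URI instead.")
}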

See Supported input files and requirements for the Vertex AI Gemini API to learn more about the following:

  • Different options for providing a file in a request (inline, or using the file's URL or URI)
  • Supported file types
  • Supported MIME types and how to specify them
  • Requirements and best practices for files and multimodal requests



What else can you do?

Try out other capabilities

Learn how to control content generation

You can also experiment with prompts and model configurations, and even get a generated code snippet, using Google AI Studio.

Learn more about the supported models

Learn about the models available for various use cases and their quotas and pricing.


Give feedback about your experience with Firebase AI Logic