Build multi-turn conversations (chat) using the Gemini API

With the Gemini API, you can build freeform conversations across multiple turns. The Firebase AI Logic SDK simplifies the process by managing the state of the conversation, so unlike with generateContent() (or generateContentStream()), you don't have to store the conversation history yourself.

Before you begin

Click your Gemini API provider to view provider-specific content and code on this page.

If you haven't already, complete the getting started guide, which describes how to set up your Firebase project, connect your app to Firebase, add the SDK, initialize the backend service for your chosen Gemini API provider, and create a GenerativeModel instance.

For testing and iterating on your prompts, and even getting a generated code snippet, we recommend using Google AI Studio.

Send a chat prompt request

Before trying this sample, complete the Before you begin section of this guide to set up your project and app.
In that section, you'll also click a button for your chosen Gemini API provider so that you see provider-specific content on this page.

To build a multi-turn conversation (like chat), start by initializing the chat by calling startChat(). Then use sendMessage() to send a new user message, which will also append the message and the response to the chat history.

There are two possible options for the role associated with the content in a conversation:

  • user: the role that provides the prompts. This value is the default for calls to sendMessage(), and the function throws an exception if a different role is passed.

  • model: the role that provides the responses. This role can be used when calling startChat() with existing history.

Swift

You can call startChat() and sendMessage() to send a new user message:


import FirebaseAI

// Initialize the Gemini Developer API backend service
let ai = FirebaseAI.firebaseAI(backend: .googleAI())

// Create a `GenerativeModel` instance with a model that supports your use case
let model = ai.generativeModel(modelName: "gemini-2.0-flash")


// Optionally specify existing chat history
let history = [
  ModelContent(role: "user", parts: "Hello, I have 2 dogs in my house."),
  ModelContent(role: "model", parts: "Great to meet you. What would you like to know?"),
]

// Initialize the chat with optional chat history
let chat = model.startChat(history: history)

// To generate text output, call sendMessage and pass in the message
let response = try await chat.sendMessage("How many paws are in my house?")
print(response.text ?? "No text in response.")

Kotlin

You can call startChat() and sendMessage() to send a new user message:

For Kotlin, the methods in this SDK are suspend functions and need to be called from a coroutine scope (a minimal sketch of launching one follows the example below).

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
val model = Firebase.ai(backend = GenerativeBackend.googleAI())
                        .generativeModel("gemini-2.0-flash")


// Initialize the chat
val chat = model.startChat(
  history = listOf(
    content(role = "user") { text("Hello, I have 2 dogs in my house.") },
    content(role = "model") { text("Great to meet you. What would you like to know?") }
  )
)

val response = chat.sendMessage("How many paws are in my house?")
print(response.text)
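
The example above calls sendMessage() at the top level for brevity. As noted, sendMessage() is a suspend function, so in an app you would launch it from a coroutine scope. The following is a minimal sketch only: the chatScope value, the askAboutPaws() helper, and the reuse of the model instance from the snippet above are illustrative assumptions, not part of the SDK (in an Android app you would typically use lifecycleScope or viewModelScope instead).

import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.SupervisorJob
import kotlinx.coroutines.launch

// Hypothetical scope for illustration; prefer lifecycleScope or viewModelScope in Android code
val chatScope = CoroutineScope(SupervisorJob() + Dispatchers.Default)

fun askAboutPaws() {
    chatScope.launch {
        // `model` is the GenerativeModel instance created in the snippet above
        val chat = model.startChat()

        // sendMessage() suspends until the model's response is available
        val response = chat.sendMessage("How many paws are in my house?")
        println(response.text)
    }
}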

Java

You can call startChat() and sendMessage() to send a new user message:

For Java, the methods in this SDK return a ListenableFuture.

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI())
        .generativeModel("gemini-2.0-flash");

// Use the GenerativeModelFutures Java compatibility layer which offers
// support for ListenableFuture and Publisher APIs
GenerativeModelFutures model = GenerativeModelFutures.from(ai);


// (optional) Create previous chat history for context
Content.Builder userContentBuilder = new Content.Builder();
userContentBuilder.setRole("user");
userContentBuilder.addText("Hello, I have 2 dogs in my house.");
Content userContent = userContentBuilder.build();

Content.Builder modelContentBuilder = new Content.Builder();
modelContentBuilder.setRole("model");
modelContentBuilder.addText("Great to meet you. What would you like to know?");
Content modelContent = modelContentBuilder.build();

List<Content> history = Arrays.asList(userContent, modelContent);

// Initialize the chat
ChatFutures chat = model.startChat(history);

// Create a new user message
Content.Builder messageBuilder = new Content.Builder();
messageBuilder.setRole("user");
messageBuilder.addText("How many paws are in my house?");

Content message = messageBuilder.build();

// Send the message
ListenableFuture<GenerateContentResponse> response = chat.sendMessage(message);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
        System.out.println(resultText);
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);

Web

You can call startChat() and sendMessage() to send a new user message:


import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(ai, { model: "gemini-2.0-flash" });


async function run() {
  const chat = model.startChat({
    history: [
      {
        role: "user",
        parts: [{ text: "Hello, I have 2 dogs in my house." }],
      },
      {
        role: "model",
        parts: [{ text: "Great to meet you. What would you like to know?" }],
      },
    ],
    generationConfig: {
      maxOutputTokens: 100,
    },
  });

  const msg = "How many paws are in my house?";

  const result = await chat.sendMessage(msg);

  const response = await result.response;
  const text = response.text();
  console.log(text);
}

run();

Dart

You can call startChat() and sendMessage() to send a new user message:


import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

// Initialize FirebaseApp
await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a model that supports your use case
final model =
      FirebaseAI.googleAI().generativeModel(model: 'gemini-2.0-flash');


final chat = model.startChat();
// Provide a prompt that contains text
final prompt = Content.text('Write a story about a magic backpack.');

final response = await chat.sendMessage(prompt);
print(response.text);

Unity

You can call StartChat() and SendMessageAsync() to send a new user message:


using Firebase;
using Firebase.AI;

// Initialize the Gemini Developer API backend service
var ai = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI());

// Create a `GenerativeModel` instance with a model that supports your use case
var model = ai.GetGenerativeModel(modelName: "gemini-2.0-flash");


// Optionally specify existing chat history
var history = new [] {
  ModelContent.Text("Hello, I have 2 dogs in my house."),
  new ModelContent("model", new ModelContent.TextPart("Great to meet you. What would you like to know?")),
};

// Initialize the chat with optional chat history
var chat = model.StartChat(history);

// To generate text output, call SendMessageAsync and pass in the message
var response = await chat.SendMessageAsync("How many paws are in my house?");
UnityEngine.Debug.Log(response.Text ?? "No text in response.");

Learn how to choose a model appropriate for your use case and app.

Stream the response

Before trying this sample, complete the Before you begin section of this guide to set up your project and app.
In that section, you'll also click a button for your chosen Gemini API provider so that you see provider-specific content on this page.

You can achieve faster interactions by not waiting for the entire result of the model's generation, and instead using streaming to handle partial results. To stream the response, call sendMessageStream().
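
As a minimal sketch in Kotlin (assuming the same model instance and coroutine scope as in the Kotlin example above), sendMessageStream() returns a flow of partial responses that you can collect as they arrive:

// Initialize the chat (optionally with history, as shown earlier)
val chat = model.startChat()

// sendMessageStream() returns a flow of partial GenerateContentResponse chunks;
// collect them to display the text as it is generated
chat.sendMessageStream("How many paws are in my house?").collect { chunk ->
    print(chunk.text)
}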



What else can you do?

Try out other capabilities

Learn how to control content generation

You can also experiment with prompts and model configurations, and even get a generated code snippet, using Google AI Studio.

Learn more about the supported models

Learn about the models available for various use cases, and their quotas and pricing.


Give feedback about your experience with Firebase AI Logic