Generate images using Gemini


You can ask a Gemini model to generate and edit images using text-only or text-and-image prompts. When you use Firebase AI Logic, you can make this request directly from your app.

With this capability, you can:

  • Generate images iteratively through natural-language conversation, adjusting images while maintaining consistency and context.

  • Generate images with high-quality text rendering, including long strings of text.

  • Generate interleaved text-and-image output. For example, a blog post with text and images in a single turn. Previously, this required stringing together multiple models.

  • Generate images using Gemini's world knowledge and reasoning capabilities.

You can find a complete list of supported modalities and capabilities (along with example prompts) later on this page.

For image output, use the Gemini model gemini-2.0-flash-preview-image-generation and include responseModalities: ["TEXT", "IMAGE"] in the model configuration.

Jump to code for text-to-image Jump to code for interleaved text and images

Jump to code for image editing Jump to code for iterative image editing


See other guides for additional options for working with images
Analyze images Analyze images on-device Generate structured output

Choosing between the Gemini and Imagen models

The Firebase AI Logic SDKs support image generation using either a Gemini model or an Imagen model. For most use cases, start with Gemini, and then choose Imagen for specialized tasks where image quality is critical.

The Firebase AI Logic SDKs don't yet support image input (for example, for editing) with Imagen models. So, if you want to work with input images, use a Gemini model instead.

Choose Gemini when you want:

  • To use world knowledge and reasoning to generate contextually relevant images.
  • To seamlessly blend text and images.
  • To embed accurate visuals within long sequences of text.
  • To edit images conversationally while maintaining context.

Choose Imagen when you want:

  • To prioritize image quality, photorealism, artistic detail, or specific styles (for example, impressionism or anime).
  • To explicitly specify the aspect ratio or format of generated images.

Before you begin

Click your Gemini API provider to view provider-specific content and code on this page.

If you haven't already, complete the getting started guide, which describes how to set up your Firebase project, connect your app to Firebase, add the SDK, initialize the backend service for your chosen Gemini API provider, and create a GenerativeModel instance.

For testing and iterating on your prompts, and even getting a generated code snippet, we recommend using Google AI Studio.

Models that support this capability

Image output from Gemini is only supported by gemini-2.0-flash-preview-image-generation (not gemini-2.0-flash).

The SDKs also support image generation using Imagen models.

Generate and edit images

You can generate and edit images using a Gemini model.

Generate images (text-only input)

Before trying this sample, complete the Before you begin section of this guide to set up your project and app.
In that section, you'll also click a button for your chosen Gemini API provider so that you see provider-specific content on this page.

You can ask a Gemini model to generate images by prompting with text.

Create a GenerativeModel instance, include responseModalities: ["TEXT", "IMAGE"] in the model configuration, and call generateContent.

Swift


import FirebaseAI

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "gemini-2.0-flash-preview-image-generation",
  // Configure the model to respond with text and images
  generationConfig: GenerationConfig(responseModalities: [.text, .image])
)

// Provide a text prompt instructing the model to generate an image
let prompt = "Generate an image of the Eiffel tower with fireworks in the background."

// To generate an image, call `generateContent` with the text input
let response = try await model.generateContent(prompt)

// Handle the generated image
guard let inlineDataPart = response.inlineDataParts.first else {
  fatalError("No image data in response.")
}
guard let uiImage = UIImage(data: inlineDataPart.data) else {
  fatalError("Failed to convert data to UIImage.")
}

Kotlin


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-2.0-flash-preview-image-generation",
    // Configure the model to respond with text and images
    generationConfig = generationConfig {
        responseModalities = listOf(ResponseModality.TEXT, ResponseModality.IMAGE)
    }
)

// Provide a text prompt instructing the model to generate an image
val prompt = "Generate an image of the Eiffel tower with fireworks in the background."

// To generate image output, call `generateContent` with the text input
val generatedImageAsBitmap = model.generateContent(prompt)
    // Handle the generated image
    .candidates.first().content.parts.firstNotNullOf { it.asImageOrNull() }

Java


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI()).generativeModel(
    "gemini-2.0-flash-preview-image-generation",
    // Configure the model to respond with text and images
    new GenerationConfig.Builder()
        .setResponseModalities(Arrays.asList(ResponseModality.TEXT, ResponseModality.IMAGE))
        .build()
);

GenerativeModelFutures model = GenerativeModelFutures.from(ai);

// Provide a text prompt instructing the model to generate an image
Content prompt = new Content.Builder()
        .addText("Generate an image of the Eiffel Tower with fireworks in the background.")
        .build();

// To generate an image, call `generateContent` with the text input
ListenableFuture<GenerateContentResponse> response = model.generateContent(prompt);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) { 
        // iterate over all the parts in the first candidate in the result object
        for (Part part : result.getCandidates().get(0).getContent().getParts()) {
            if (part instanceof ImagePart) {
                ImagePart imagePart = (ImagePart) part;
                // The returned image as a bitmap
                Bitmap generatedImageAsBitmap = imagePart.getImage();
                break;
            }
        }
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);

Web


import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend, ResponseModality } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(ai, {
  model: "gemini-2.0-flash-preview-image-generation",
  // Configure the model to respond with text and images
  generationConfig: {
    responseModalities: [ResponseModality.TEXT, ResponseModality.IMAGE],
  },
});

// Provide a text prompt instructing the model to generate an image
const prompt = 'Generate an image of the Eiffel Tower with fireworks in the background.';

// To generate an image, call `generateContent` with the text input
const result = await model.generateContent(prompt);

// Handle the generated image
try {
  const inlineDataParts = result.response.inlineDataParts();
  if (inlineDataParts?.[0]) {
    const image = inlineDataParts[0].inlineData;
    console.log(image.mimeType, image.data);
  }
} catch (err) {
  console.error('Prompt or candidate was blocked:', err);
}
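The Web example above only logs the generated image's MIME type and data. If you want to render the image in the browser, one simple option is a data URL, assuming the data field is a base64-encoded string (the same encoding the editing examples later on this page use for input images). A minimal, illustrative sketch:

// Illustrative only: assumes `image` is the `inlineData` object from the example above
// and that `image.data` is a base64-encoded string.
const imgEl = document.createElement("img");
imgEl.src = `data:${image.mimeType};base64,${image.data}`;
document.body.appendChild(imgEl);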

Dart


import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
final model = FirebaseAI.googleAI().generativeModel(
  model: 'gemini-2.0-flash-preview-image-generation',
  // Configure the model to respond with text and images
  generationConfig: GenerationConfig(responseModalities: [ResponseModality.text, ResponseModality.image]),
);

// Provide a text prompt instructing the model to generate an image
final prompt = [Content.text('Generate an image of the Eiffel Tower with fireworks in the background.')];

// To generate an image, call `generateContent` with the text input
final response = await model.generateContent(prompt);
if (response.inlineDataParts.isNotEmpty) {
  final imageBytes = response.inlineDataParts[0].bytes;
  // Process the image
} else {
  // Handle the case where no images were generated
  print('Error: No images were generated.');
}

Unity


using Firebase;
using Firebase.AI;

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
var model = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI()).GetGenerativeModel(
  modelName: "gemini-2.0-flash-preview-image-generation",
  // Configure the model to respond with text and images
  generationConfig: new GenerationConfig(
    responseModalities: new[] { ResponseModality.Text, ResponseModality.Image })
);

// Provide a text prompt instructing the model to generate an image
var prompt = "Generate an image of the Eiffel Tower with fireworks in the background.";

// To generate an image, call `GenerateContentAsync` with the text input
var response = await model.GenerateContentAsync(prompt);

var text = response.Text;
if (!string.IsNullOrWhiteSpace(text)) {
  // Do something with the text
}

// Handle the generated image
var imageParts = response.Candidates.First().Content.Parts
                         .OfType<ModelContent.InlineDataPart>()
                         .Where(part => part.MimeType == "image/png");
foreach (var imagePart in imageParts) {
  // Load the Image into a Unity Texture2D object
  UnityEngine.Texture2D texture2D = new(2, 2);
  if (texture2D.LoadImage(imagePart.Data.ToArray())) {
    // Do something with the image
  }
}

Learn how to choose a model appropriate for your use case and app.

Generate interleaved images and text

Before trying this sample, complete the Before you begin section of this guide to set up your project and app.
In that section, you'll also click a button for your chosen Gemini API provider so that you see provider-specific content on this page.

You can ask a Gemini model to generate images interleaved with its text responses. For example, you can generate images of what each step of a generated recipe might look like along with the step's instructions, without having to make separate requests to the model or to different models.

Create a GenerativeModel instance, include responseModalities: ["TEXT", "IMAGE"] in the model configuration, and call generateContent.

Swift


import FirebaseAI

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "gemini-2.0-flash-preview-image-generation",
  // Configure the model to respond with text and images
  generationConfig: GenerationConfig(responseModalities: [.text, .image])
)

// Provide a text prompt instructing the model to generate interleaved text and images
let prompt = """
Generate an illustrated recipe for a paella.
Create images to go alongside the text as you generate the recipe
"""

// To generate interleaved text and images, call `generateContent` with the text input
let response = try await model.generateContent(prompt)

// Handle the generated text and image
guard let candidate = response.candidates.first else {
  fatalError("No candidates in response.")
}
for part in candidate.content.parts {
  switch part {
  case let textPart as TextPart:
    // Do something with the generated text
    let text = textPart.text
  case let inlineDataPart as InlineDataPart:
    // Do something with the generated image
    guard let uiImage = UIImage(data: inlineDataPart.data) else {
      fatalError("Failed to convert data to UIImage.")
    }
  default:
    fatalError("Unsupported part type: \(part)")
  }
}

Kotlin


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-2.0-flash-preview-image-generation",
    // Configure the model to respond with text and images
    generationConfig = generationConfig {
        responseModalities = listOf(ResponseModality.TEXT, ResponseModality.IMAGE)
    }
)

// Provide a text prompt instructing the model to generate interleaved text and images
val prompt = """
    Generate an illustrated recipe for a paella.
    Create images to go alongside the text as you generate the recipe
    """.trimIndent()

// To generate interleaved text and images, call `generateContent` with the text input
val responseContent = model.generateContent(prompt).candidates.first().content

// The response will contain image and text parts interleaved
for (part in responseContent.parts) {
    when (part) {
        is ImagePart -> {
            // ImagePart as a bitmap
            val generatedImageAsBitmap: Bitmap? = part.asImageOrNull()
        }
        is TextPart -> {
            // Text content from the TextPart
            val text = part.text
        }
    }
}

Java


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI()).generativeModel(
    "gemini-2.0-flash-preview-image-generation",
    // Configure the model to respond with text and images
    new GenerationConfig.Builder()
        .setResponseModalities(Arrays.asList(ResponseModality.TEXT, ResponseModality.IMAGE))
        .build()
);

GenerativeModelFutures model = GenerativeModelFutures.from(ai);

// Provide a text prompt instructing the model to generate interleaved text and images
Content prompt = new Content.Builder()
        .addText("Generate an illustrated recipe for a paella.\n" +
                 "Create images to go alongside the text as you generate the recipe")
        .build();

// To generate interleaved text and images, call `generateContent` with the text input
ListenableFuture<GenerateContentResponse> response = model.generateContent(prompt);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        Content responseContent = result.getCandidates().get(0).getContent();
        // The response will contain image and text parts interleaved
        for (Part part : responseContent.getParts()) {
            if (part instanceof ImagePart) {
                // ImagePart as a bitmap
                Bitmap generatedImageAsBitmap = ((ImagePart) part).getImage();
            } else if (part instanceof TextPart){
                // Text content from the TextPart
                String text = ((TextPart) part).getText();
            }
        }
    }

    @Override
    public void onFailure(Throwable t) {
        System.err.println(t);
    }
}, executor);

Web


import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend, ResponseModality } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(ai, {
  model: "gemini-2.0-flash-preview-image-generation",
  // Configure the model to respond with text and images
  generationConfig: {
    responseModalities: [ResponseModality.TEXT, ResponseModality.IMAGE],
  },
});

// Provide a text prompt instructing the model to generate interleaved text and images
const prompt = 'Generate an illustrated recipe for a paella.\n' +
  'Create images to go alongside the text as you generate the recipe';

// To generate interleaved text and images, call `generateContent` with the text input
const result = await model.generateContent(prompt);

// Handle the generated text and image
try {
  const response = result.response;
  if (response.candidates?.[0].content?.parts) {
    for (const part of response.candidates?.[0].content?.parts) {
      if (part.text) {
        // Do something with the text
        console.log(part.text)
      }
      if (part.inlineData) {
        // Do something with the image
        const image = part.inlineData;
        console.log(image.mimeType, image.data);
      }
    }
  }

} catch (err) {
  console.error('Prompt or candidate was blocked:', err);
}

Dart


import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
final model = FirebaseAI.googleAI().generativeModel(
  model: 'gemini-2.0-flash-preview-image-generation',
  // Configure the model to respond with text and images
  generationConfig: GenerationConfig(responseModalities: [ResponseModality.text, ResponseModality.image]),
);

// Provide a text prompt instructing the model to generate interleaved text and images
final prompt = [Content.text(
  'Generate an illustrated recipe for a paella\n ' +
  'Create images to go alongside the text as you generate the recipe'
)];

// To generate interleaved text and images, call `generateContent` with the text input
final response = await model.generateContent(prompt);

// Handle the generated text and image
final parts = response.candidates.firstOrNull?.content.parts;
if (parts != null && parts.isNotEmpty) {
  for (final part in parts) {
    if (part is TextPart) {
      // Do something with the text part
      final text = part.text;
    }
    if (part is InlineDataPart) {
      // Process the image
      final imageBytes = part.bytes;
    }
  }
} else {
  // Handle the case where no images were generated
  print('Error: No images were generated.');
}

Unity


using Firebase;
using Firebase.AI;

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
var model = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI()).GetGenerativeModel(
  modelName: "gemini-2.0-flash-preview-image-generation",
  // Configure the model to respond with text and images
  generationConfig: new GenerationConfig(
    responseModalities: new[] { ResponseModality.Text, ResponseModality.Image })
);

// Provide a text prompt instructing the model to generate interleaved text and images
var prompt = "Generate an illustrated recipe for a paella \n" +
  "Create images to go alongside the text as you generate the recipe";

// To generate interleaved text and images, call `GenerateContentAsync` with the text input
var response = await model.GenerateContentAsync(prompt);

// Handle the generated text and image
foreach (var part in response.Candidates.First().Content.Parts) {
  if (part is ModelContent.TextPart textPart) {
    if (!string.IsNullOrWhiteSpace(textPart.Text)) {
      // Do something with the text
    }
  } else if (part is ModelContent.InlineDataPart dataPart) {
    if (dataPart.MimeType == "image/png") {
      // Load the Image into a Unity Texture2D object
      UnityEngine.Texture2D texture2D = new(2, 2);
      if (texture2D.LoadImage(dataPart.Data.ToArray())) {
        // Do something with the image
      }
    }
  }
}

Learn how to choose a model appropriate for your use case and app.

Edit images (text-and-image input)

Before trying this sample, complete the Before you begin section of this guide to set up your project and app.
In that section, you'll also click a button for your chosen Gemini API provider so that you see provider-specific content on this page.

You can ask a Gemini model to edit images by prompting with text and one or more images.

Create a GenerativeModel instance, include responseModalities: ["TEXT", "IMAGE"] in the model configuration, and call generateContent.

Swift


import FirebaseAI

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "gemini-2.0-flash-preview-image-generation",
  // Configure the model to respond with text and images
  generationConfig: GenerationConfig(responseModalities: [.text, .image])
)

// Provide an image for the model to edit
guard let image = UIImage(named: "scones") else { fatalError("Image file not found.") }

// Provide a text prompt instructing the model to edit the image
let prompt = "Edit this image to make it look like a cartoon"

// To edit the image, call `generateContent` with the image and text input
let response = try await model.generateContent(image, prompt)

// Handle the generated image
guard let inlineDataPart = response.inlineDataParts.first else {
  fatalError("No image data in response.")
}
guard let uiImage = UIImage(data: inlineDataPart.data) else {
  fatalError("Failed to convert data to UIImage.")
}

Kotlin


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-2.0-flash-preview-image-generation",
    // Configure the model to respond with text and images
    generationConfig = generationConfig {
        responseModalities = listOf(ResponseModality.TEXT, ResponseModality.IMAGE)
    }
)

// Provide an image for the model to edit
val bitmap = BitmapFactory.decodeResource(context.resources, R.drawable.scones)

// Provide a text prompt instructing the model to edit the image
val prompt = content {
    image(bitmap)
    text("Edit this image to make it look like a cartoon")
}

// To edit the image, call `generateContent` with the prompt (image and text input)
val generatedImageAsBitmap = model.generateContent(prompt)
    // Handle the generated text and image
    .candidates.first().content.parts.firstNotNullOf { it.asImageOrNull() }

Java


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI()).generativeModel(
    "gemini-2.0-flash-preview-image-generation",
    // Configure the model to respond with text and images
    new GenerationConfig.Builder()
        .setResponseModalities(Arrays.asList(ResponseModality.TEXT, ResponseModality.IMAGE))
        .build()
);

GenerativeModelFutures model = GenerativeModelFutures.from(ai);

// Provide an image for the model to edit
Bitmap bitmap = BitmapFactory.decodeResource(resources, R.drawable.scones);

// Provide a text prompt instructing the model to edit the image
Content promptContent = new Content.Builder()
        .addImage(bitmap)
        .addText("Edit this image to make it look like a cartoon")
        .build();

// To edit the image, call `generateContent` with the prompt (image and text input)
ListenableFuture<GenerateContentResponse> response = model.generateContent(promptContent);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        // iterate over all the parts in the first candidate in the result object
        for (Part part : result.getCandidates().get(0).getContent().getParts()) {
            if (part instanceof ImagePart) {
                ImagePart imagePart = (ImagePart) part;
                Bitmap generatedImageAsBitmap = imagePart.getImage();
                break;
            }
        }
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);

Web


import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend, ResponseModality } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(ai, {
  model: "gemini-2.0-flash-preview-image-generation",
  // Configure the model to respond with text and images
  generationConfig: {
    responseModalities: [ResponseModality.TEXT, ResponseModality.IMAGE],
  },
});

// Prepare an image for the model to edit
async function fileToGenerativePart(file) {
  const base64EncodedDataPromise = new Promise((resolve) => {
    const reader = new FileReader();
    reader.onloadend = () => resolve(reader.result.split(',')[1]);
    reader.readAsDataURL(file);
  });
  return {
    inlineData: { data: await base64EncodedDataPromise, mimeType: file.type },
  };
}

// Provide a text prompt instructing the model to edit the image
const prompt = "Edit this image to make it look like a cartoon";

const fileInputEl = document.querySelector("input[type=file]");
const imagePart = await fileToGenerativePart(fileInputEl.files[0]);

// To edit the image, call `generateContent` with the image and text input
const result = await model.generateContent([prompt, imagePart]);

// Handle the generated image
try {
  const inlineDataParts = result.response.inlineDataParts();
  if (inlineDataParts?.[0]) {
    const image = inlineDataParts[0].inlineData;
    console.log(image.mimeType, image.data);
  }
} catch (err) {
  console.error('Prompt or candidate was blocked:', err);
}

Dart


import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
final model = FirebaseAI.googleAI().generativeModel(
  model: 'gemini-2.0-flash-preview-image-generation',
  // Configure the model to respond with text and images
  generationConfig: GenerationConfig(responseModalities: [ResponseModality.text, ResponseModality.image]),
);

// Prepare an image for the model to edit
final image = await File('scones.jpg').readAsBytes();
final imagePart = InlineDataPart('image/jpeg', image);

// Provide a text prompt instructing the model to edit the image
final prompt = TextPart("Edit this image to make it look like a cartoon");

// To edit the image, call `generateContent` with the image and text input
final response = await model.generateContent([
  Content.multi([prompt,imagePart])
]);

// Handle the generated image
if (response.inlineDataParts.isNotEmpty) {
  final imageBytes = response.inlineDataParts[0].bytes;
  // Process the image
} else {
  // Handle the case where no images were generated
  print('Error: No images were generated.');
}

Unity


using Firebase;
using Firebase.AI;

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
var model = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI()).GetGenerativeModel(
  modelName: "gemini-2.0-flash-preview-image-generation",
  // Configure the model to respond with text and images
  generationConfig: new GenerationConfig(
    responseModalities: new[] { ResponseModality.Text, ResponseModality.Image })
);

// Prepare an image for the model to edit
var imageFile = System.IO.File.ReadAllBytes(System.IO.Path.Combine(
  UnityEngine.Application.streamingAssetsPath, "scones.jpg"));
var image = ModelContent.InlineData("image/jpeg", imageFile);

// Provide a text prompt instructing the model to edit the image
var prompt = ModelContent.Text("Edit this image to make it look like a cartoon.");

// To edit the image, call `GenerateContent` with the image and text input
var response = await model.GenerateContentAsync(new [] { prompt, image });

var text = response.Text;
if (!string.IsNullOrWhiteSpace(text)) {
  // Do something with the text
}

// Handle the generated image
var imageParts = response.Candidates.First().Content.Parts
                         .OfType<ModelContent.InlineDataPart>()
                         .Where(part => part.MimeType == "image/png");
foreach (var imagePart in imageParts) {
  // Load the Image into a Unity Texture2D object
  UnityEngine.Texture2D texture2D = new(2, 2);
  if (texture2D.LoadImage(imagePart.Data.ToArray())) {
    // Do something with the image
  }
}

Learn how to choose a model appropriate for your use case and app.

Iterate and edit images using multi-turn chat

Before trying this sample, complete the Before you begin section of this guide to set up your project and app.
In that section, you'll also click a button for your chosen Gemini API provider so that you see provider-specific content on this page.

Using multi-turn chat, you can iterate with a Gemini model on the images that it generates or that you supply.

Create a GenerativeModel instance, include responseModalities: ["TEXT", "IMAGE"] in the model configuration, then call startChat() and sendMessage() to send new user messages.

Swift


import FirebaseAI

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "gemini-2.0-flash-preview-image-generation",
  // Configure the model to respond with text and images
  generationConfig: GenerationConfig(responseModalities: [.text, .image])
)

// Initialize the chat
let chat = model.startChat()

guard let image = UIImage(named: "scones") else { fatalError("Image file not found.") }

// Provide an initial text prompt instructing the model to edit the image
let prompt = "Edit this image to make it look like a cartoon"

// To generate an initial response, send a user message with the image and text prompt
let response = try await chat.sendMessage(image, prompt)

// Inspect the generated image
guard let inlineDataPart = response.inlineDataParts.first else {
  fatalError("No image data in response.")
}
guard let uiImage = UIImage(data: inlineDataPart.data) else {
  fatalError("Failed to convert data to UIImage.")
}

// Follow up requests do not need to specify the image again
let followUpResponse = try await chat.sendMessage("But make it old-school line drawing style")

// Inspect the edited image after the follow up request
guard let followUpInlineDataPart = followUpResponse.inlineDataParts.first else {
  fatalError("No image data in response.")
}
guard let followUpUIImage = UIImage(data: followUpInlineDataPart.data) else {
  fatalError("Failed to convert data to UIImage.")
}

Kotlin


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "gemini-2.0-flash-preview-image-generation",
    // Configure the model to respond with text and images
    generationConfig = generationConfig {
        responseModalities = listOf(ResponseModality.TEXT, ResponseModality.IMAGE)
    }
)

// Provide an image for the model to edit
val bitmap = BitmapFactory.decodeResource(context.resources, R.drawable.scones)

// Create the initial prompt instructing the model to edit the image
val prompt = content {
    image(bitmap)
    text("Edit this image to make it look like a cartoon")
}

// Initialize the chat
val chat = model.startChat()

// To generate an initial response, send a user message with the image and text prompt
var response = chat.sendMessage(prompt)
// Inspect the returned image
var generatedImageAsBitmap = response
    .candidates.first().content.parts.firstNotNullOf { it.asImageOrNull() }

// Follow up requests do not need to specify the image again
response = chat.sendMessage("But make it old-school line drawing style")
generatedImageAsBitmap = response
    .candidates.first().content.parts.firstNotNullOf { it.asImageOrNull() }

Java


// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
GenerativeModel ai = FirebaseAI.getInstance(GenerativeBackend.googleAI()).generativeModel(
    "gemini-2.0-flash-preview-image-generation",
    // Configure the model to respond with text and images
    new GenerationConfig.Builder()
        .setResponseModalities(Arrays.asList(ResponseModality.TEXT, ResponseModality.IMAGE))
        .build()
);

GenerativeModelFutures model = GenerativeModelFutures.from(ai);

// Provide an image for the model to edit
Bitmap bitmap = BitmapFactory.decodeResource(resources, R.drawable.scones);

// Initialize the chat
ChatFutures chat = model.startChat();

// Create the initial prompt instructing the model to edit the image
Content prompt = new Content.Builder()
        .setRole("user")
        .addImage(bitmap)
        .addText("Edit this image to make it look like a cartoon")
        .build();

// To generate an initial response, send a user message with the image and text prompt
ListenableFuture<GenerateContentResponse> response = chat.sendMessage(prompt);
// Extract the image from the initial response
ListenableFuture<@Nullable Bitmap> initialRequest = Futures.transform(response, result -> {
    for (Part part : result.getCandidates().get(0).getContent().getParts()) {
        if (part instanceof ImagePart) {
            ImagePart imagePart = (ImagePart) part;
            return imagePart.getImage();
        }
    }
    return null;
}, executor);

// Follow up requests do not need to specify the image again
ListenableFuture<GenerateContentResponse> modelResponseFuture = Futures.transformAsync(
        initialRequest,
        generatedImage -> {
            Content followUpPrompt = new Content.Builder()
                    .addText("But make it old-school line drawing style")
                    .build();
            return chat.sendMessage(followUpPrompt);
        },
        executor);

// Add a final callback to check the reworked image
Futures.addCallback(modelResponseFuture, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        for (Part part : result.getCandidates().get(0).getContent().getParts()) {
            if (part instanceof ImagePart) {
                ImagePart imagePart = (ImagePart) part;
                Bitmap generatedImageAsBitmap = imagePart.getImage();
                break;
            }
        }
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);

Web


import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend, ResponseModality } from "firebase/ai";

// TODO(developer) Replace the following with your app's Firebase configuration
// See: https://firebase.google.com/docs/web/learn-more#config-object
const firebaseConfig = {
  // ...
};

// Initialize FirebaseApp
const firebaseApp = initializeApp(firebaseConfig);

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a `GenerativeModel` instance with a model that supports your use case
const model = getGenerativeModel(ai, {
  model: "gemini-2.0-flash-preview-image-generation",
  // Configure the model to respond with text and images
  generationConfig: {
    responseModalities: [ResponseModality.TEXT, ResponseModality.IMAGE],
  },
});

// Prepare an image for the model to edit
async function fileToGenerativePart(file) {
  const base64EncodedDataPromise = new Promise((resolve) => {
    const reader = new FileReader();
    reader.onloadend = () => resolve(reader.result.split(',')[1]);
    reader.readAsDataURL(file);
  });
  return {
    inlineData: { data: await base64EncodedDataPromise, mimeType: file.type },
  };
}

const fileInputEl = document.querySelector("input[type=file]");
const imagePart = await fileToGenerativePart(fileInputEl.files[0]);

// Provide an initial text prompt instructing the model to edit the image
const prompt = "Edit this image to make it look like a cartoon";

// Initialize the chat
const chat = model.startChat();

// To generate an initial response, send a user message with the image and text prompt
const result = await chat.sendMessage([prompt, imagePart]);

// Request and inspect the generated image
try {
  const inlineDataParts = result.response.inlineDataParts();
  if (inlineDataParts?.[0]) {
    // Inspect the generated image
    const image = inlineDataParts[0].inlineData;
    console.log(image.mimeType, image.data);
  }
} catch (err) {
  console.error('Prompt or candidate was blocked:', err);
}

// Follow up requests do not need to specify the image again
const followUpResult = await chat.sendMessage("But make it old-school line drawing style");

// Request and inspect the returned image
try {
  const followUpInlineDataParts = followUpResult.response.inlineDataParts();
  if (followUpInlineDataParts?.[0]) {
    // Inspect the generated image
    const followUpImage = followUpInlineDataParts[0].inlineData;
    console.log(followUpImage.mimeType, followUpImage.data);
  }
} catch (err) {
  console.error('Prompt or candidate was blocked:', err);
}

Dart


import 'package:firebase_ai/firebase_ai.dart';
import 'package:firebase_core/firebase_core.dart';
import 'firebase_options.dart';

await Firebase.initializeApp(
  options: DefaultFirebaseOptions.currentPlatform,
);

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
final model = FirebaseAI.googleAI().generativeModel(
  model: 'gemini-2.0-flash-preview-image-generation',
  // Configure the model to respond with text and images
  generationConfig: GenerationConfig(responseModalities: [ResponseModality.text, ResponseModality.image]),
);

// Prepare an image for the model to edit
final image = await File('scones.jpg').readAsBytes();
final imagePart = InlineDataPart('image/jpeg', image);

// Provide an initial text prompt instructing the model to edit the image
final prompt = TextPart("Edit this image to make it look like a cartoon");

// Initialize the chat
final chat = model.startChat();

// To generate an initial response, send a user message with the image and text prompt
final response = await chat.sendMessage([
  Content.multi([prompt,imagePart])
]);

// Inspect the returned image
if (response.inlineDataParts.isNotEmpty) {
  final imageBytes = response.inlineDataParts[0].bytes;
  // Process the image
} else {
  // Handle the case where no images were generated
  print('Error: No images were generated.');
}

// Follow up requests do not need to specify the image again
final followUpResponse = await chat.sendMessage([
  Content.text("But make it old-school line drawing style")
]);

// Inspect the returned image
if (followUpResponse.inlineDataParts.isNotEmpty) {
  final followUpImageBytes = followUpResponse.inlineDataParts[0].bytes;
  // Process the image
} else {
  // Handle the case where no images were generated
  print('Error: No images were generated.');
}

Unity


using Firebase;
using Firebase.AI;

// Initialize the Gemini Developer API backend service
// Create a `GenerativeModel` instance with a Gemini model that supports image output
var model = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI()).GetGenerativeModel(
  modelName: "gemini-2.0-flash-preview-image-generation",
  // Configure the model to respond with text and images
  generationConfig: new GenerationConfig(
    responseModalities: new[] { ResponseModality.Text, ResponseModality.Image })
);

// Prepare an image for the model to edit
var imageFile = System.IO.File.ReadAllBytes(System.IO.Path.Combine(
  UnityEngine.Application.streamingAssetsPath, "scones.jpg"));
var image = ModelContent.InlineData("image/jpeg", imageFile);

// Provide an initial text prompt instructing the model to edit the image
var prompt = ModelContent.Text("Edit this image to make it look like a cartoon.");

// Initialize the chat
var chat = model.StartChat();

// To generate an initial response, send a user message with the image and text prompt
var response = await chat.SendMessageAsync(new [] { prompt, image });

// Inspect the returned image
var imageParts = response.Candidates.First().Content.Parts
                         .OfType<ModelContent.InlineDataPart>()
                         .Where(part => part.MimeType == "image/png");
// Load the image into a Unity Texture2D object
UnityEngine.Texture2D texture2D = new(2, 2);
if (texture2D.LoadImage(imageParts.First().Data.ToArray())) {
  // Do something with the image
}

// Follow up requests do not need to specify the image again
var followUpResponse = await chat.SendMessageAsync("But make it old-school line drawing style");

// Inspect the returned image
var followUpImageParts = followUpResponse.Candidates.First().Content.Parts
                         .OfType<ModelContent.InlineDataPart>()
                         .Where(part => part.MimeType == "image/png");
// Load the image into a Unity Texture2D object
UnityEngine.Texture2D followUpTexture2D = new(2, 2);
if (followUpTexture2D.LoadImage(followUpImageParts.First().Data.ToArray())) {
  // Do something with the image
}

Learn how to choose a model appropriate for your use case and app.



Supported capabilities, limitations, and best practices

Supported modalities and capabilities

The following are the supported modalities and capabilities for image output from a Gemini model. Each capability shows an example prompt and has a code example above.

  • Text to image (text-only to image)

    • Generate an image of the Eiffel Tower with fireworks in the background.
  • Text to image (text rendering)

    • Generate a cinematic photo of a large building with this giant text projection mapped onto the front of the building.
  • Text to image(s) and text (interleaved)

    • Generate an illustrated recipe for a paella. Create images alongside the text as you generate the recipe.

    • Generate a story about a dog in a 3D animation style. For each scene, generate an image.

  • Image(s) and text to image(s) and text (interleaved)

    • [image of a furnished room] + What other color sofas would work in my space? Can you update the image?
  • Image editing (text and image to image)

    • [image of scones] + Edit this image to make it look like a cartoon

    • [image of a cat] + [image of a pillow] + Create a cross stitch of my cat on this pillow. (A sketch of a multi-image request like this follows this list.)

  • Multi-turn image editing (chat)

    • [image of a blue car] + Turn this car into a convertible. Then change the color to yellow.
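The image-editing capability accepts more than one input image in a single request (for example, the cat-and-pillow prompt above). The code examples on this page pass a single image, but with the Web SDK you can include several inline-data parts alongside the text in the same generateContent call. A minimal sketch, reusing the fileToGenerativePart helper from the editing example above and two hypothetical File objects (catFile and pillowFile):

// Hypothetical input files selected by the user (e.g. "cat.jpg" and "pillow.jpg")
const catPart = await fileToGenerativePart(catFile);
const pillowPart = await fileToGenerativePart(pillowFile);

// Send the text prompt and both images in a single request
const result = await model.generateContent([
  "Create a cross stitch of my cat on this pillow.",
  catPart,
  pillowPart,
]);

// Handle the generated image the same way as in the editing example above
const inlineDataParts = result.response.inlineDataParts();
if (inlineDataParts?.[0]) {
  const image = inlineDataParts[0].inlineData;
  console.log(image.mimeType, image.data);
}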

Limitations and best practices

The following are limitations and best practices for image output from a Gemini model.

  • In this public experimental release, Gemini supports the following:

    • Generating PNG images with a maximum dimension of 1024 px.
    • Generating and editing images of people.
    • Using safety filters that provide a flexible, less restrictive user experience.
  • For best performance, use the following languages: en, es-mx, ja-jp, zh-cn, hi-in.

  • Image generation doesn't support audio or video inputs.

  • Image generation may not always trigger. These are some known issues (a defensive retry sketch follows at the end of this page):

    • The model may output only text.
      Try explicitly asking for image outputs (for example, "generate an image", "provide images as you go along", "update the image").

    • The model may stop generating partway through.
      Try again or try a different prompt.

    • The model may generate text as an image.
      Try explicitly asking for text outputs (for example, "generate narrative text along with illustrations").

  • When generating text for an image, Gemini works best if you first generate the text and then ask for an image that includes the text.
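Because image generation may not always trigger, it can help to check whether the response actually contains an image part and, if it doesn't, retry with a more explicit instruction. A minimal sketch with the Web SDK; the retry wording and the single-retry policy are illustrative assumptions, not an official recommendation:

async function generateImageWithRetry(model, prompt) {
  // First attempt with the original prompt
  let result = await model.generateContent(prompt);
  let parts = result.response.inlineDataParts();

  // If the model returned text only, retry once and explicitly ask for an image
  if (!parts || parts.length === 0) {
    result = await model.generateContent(
      prompt + "\nGenerate an image as part of your response."
    );
    parts = result.response.inlineDataParts();
  }

  // Returns the first inline image part, or null if no image was generated after the retry
  return parts && parts.length > 0 ? parts[0] : null;
}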