Firebase.AI.GenerativeModel

A type that represents a remote multimodal model (like Gemini), with the ability to generate content based on various input types.
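A GenerativeModel is obtained from a FirebaseAI instance rather than constructed directly. A minimal sketch, assuming the default Firebase app is already initialized and that the SDK exposes FirebaseAI.DefaultInstance and GetGenerativeModel (the model name below is an example, not a recommendation):

```csharp
using Firebase.AI;

public static class ModelSetup
{
    public static GenerativeModel GetModel()
    {
        // Assumption: FirebaseAI.DefaultInstance returns the instance bound
        // to the default Firebase app.
        var ai = FirebaseAI.DefaultInstance;

        // "gemini-2.0-flash" is an example model name; substitute any model
        // supported by your project.
        return ai.GetGenerativeModel(modelName: "gemini-2.0-flash");
    }
}
```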

Summary

Public functions

CountTokensAsync(ModelContent content, CancellationToken cancellationToken)
Task< CountTokensResponse >
Counts the number of tokens in a prompt using the model's tokenizer.
CountTokensAsync(string text, CancellationToken cancellationToken)
Task< CountTokensResponse >
Counts the number of tokens in a prompt using the model's tokenizer.
CountTokensAsync(IEnumerable< ModelContent > content, CancellationToken cancellationToken)
Task< CountTokensResponse >
Counts the number of tokens in a prompt using the model's tokenizer.
GenerateContentAsync(ModelContent content, CancellationToken cancellationToken)
Task< GenerateContentResponse >
Generates new content from input ModelContent given to the model as a prompt.
GenerateContentAsync(string text, CancellationToken cancellationToken)
Task< GenerateContentResponse >
Generates new content from input text given to the model as a prompt.
GenerateContentAsync(IEnumerable< ModelContent > content, CancellationToken cancellationToken)
Task< GenerateContentResponse >
Generates new content from input ModelContent given to the model as a prompt.
GenerateContentStreamAsync(ModelContent content, CancellationToken cancellationToken)
IAsyncEnumerable< GenerateContentResponse >
Generates new content as a stream from input ModelContent given to the model as a prompt.
GenerateContentStreamAsync(string text, CancellationToken cancellationToken)
IAsyncEnumerable< GenerateContentResponse >
Generates new content as a stream from input text given to the model as a prompt.
GenerateContentStreamAsync(IEnumerable< ModelContent > content, CancellationToken cancellationToken)
IAsyncEnumerable< GenerateContentResponse >
Generates new content as a stream from input ModelContent given to the model as a prompt.
StartChat(params ModelContent[] history)
Chat
Creates a new chat conversation using this model with the provided history.
StartChat(IEnumerable< ModelContent > history)
Chat
Creates a new chat conversation using this model with the provided history.

Public functions

CountTokensAsync

Task< CountTokensResponse > CountTokensAsync(
  ModelContent content,
  CancellationToken cancellationToken
)

Counts the number of tokens in a prompt using the model's tokenizer.

Details
Parameters
content
The input given to the model as a prompt.
cancellationToken
An optional token to cancel the operation.
Exceptions
HttpRequestException
Thrown when an error occurs during the request.
Returns
The CountTokensResponse of running the model's tokenizer on the input.

CountTokensAsync

Task< CountTokensResponse > CountTokensAsync(
  string text,
  CancellationToken cancellationToken
)

Counts the number of tokens in a prompt using the model's tokenizer.

Details
Parameters
text
The text input given to the model as a prompt.
cancellationToken
An optional token to cancel the operation.
Exceptions
HttpRequestException
Thrown when an error occurs during the request.
Returns
The CountTokensResponse of running the model's tokenizer on the input.

CountTokensAsync

Task< CountTokensResponse > CountTokensAsync(
  IEnumerable< ModelContent > content,
  CancellationToken cancellationToken
)

Counts the number of tokens in a prompt using the model's tokenizer.

Details
Parameters
content
The input given to the model as a prompt.
cancellationToken
An optional token to cancel the operation.
Exceptions
HttpRequestException
Thrown when an error occurs during the request.
Returns
The CountTokensResponse of running the model's tokenizer on the input.
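As an illustration, a hedged sketch of counting tokens for a text prompt before sending it, using the string overload above. It assumes `model` is an existing GenerativeModel and that CountTokensResponse exposes a `TotalTokens` property, as in the other Firebase AI SDKs:

```csharp
using System.Threading.Tasks;
using Firebase.AI;

public static class TokenCounting
{
    public static async Task LogTokenCountAsync(GenerativeModel model)
    {
        // Count tokens before generation, e.g. to stay within the
        // model's context window. The cancellation token is optional.
        CountTokensResponse countResponse =
            await model.CountTokensAsync("Why is the sky blue?");

        // Assumption: TotalTokens reports the total token count of the prompt.
        UnityEngine.Debug.Log($"Total tokens: {countResponse.TotalTokens}");
    }
}
```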

GenerateContentAsync

Task< GenerateContentResponse > GenerateContentAsync(
  ModelContent content,
  CancellationToken cancellationToken
)

Generates new content from input ModelContent given to the model as a prompt.

Details
Parameters
content
The input given to the model as a prompt.
cancellationToken
An optional token to cancel the operation.
Exceptions
HttpRequestException
Thrown when an error occurs during content generation.
Returns
The generated content response from the model.

GenerateContentAsync

Task< GenerateContentResponse > GenerateContentAsync(
  string text,
  CancellationToken cancellationToken
)

Generates new content from input text given to the model as a prompt.

Details
Parameters
text
The text given to the model as a prompt.
cancellationToken
An optional token to cancel the operation.
Exceptions
HttpRequestException
Thrown when an error occurs during content generation.
Returns
The generated content response from the model.

GenerateContentAsync

Task< GenerateContentResponse > GenerateContentAsync(
  IEnumerable< ModelContent > content,
  CancellationToken cancellationToken
)

Generates new content from input ModelContent given to the model as a prompt.

Details
Parameters
content
The input given to the model as a prompt.
cancellationToken
An optional token to cancel the operation.
Exceptions
HttpRequestException
Thrown when an error occurs during content generation.
Returns
The generated content response from the model.
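The overloads above can be sketched with a one-shot text call, here with a timeout-based cancellation token. This assumes `model` is an existing GenerativeModel and that GenerateContentResponse exposes a `Text` convenience property, as in the other Firebase AI SDKs:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Firebase.AI;

public static class OneShotGeneration
{
    public static async Task LogHaikuAsync(GenerativeModel model)
    {
        // Cancel the request automatically if it takes longer than 30 seconds;
        // a timed-out request surfaces as a cancelled task.
        using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));

        GenerateContentResponse response =
            await model.GenerateContentAsync("Write a haiku about Unity.", cts.Token);

        // Assumption: Text aggregates the text parts of the first candidate.
        UnityEngine.Debug.Log(response.Text);
    }
}
```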

GenerateContentStreamAsync

IAsyncEnumerable< GenerateContentResponse > GenerateContentStreamAsync(
  ModelContent content,
  CancellationToken cancellationToken
)

Generates new content as a stream from input ModelContent given to the model as a prompt.

Details
Parameters
content
The input given to the model as a prompt.
cancellationToken
An optional token to cancel the operation.
Exceptions
HttpRequestException
Thrown when an error occurs during content generation.
Returns
A stream of generated content responses from the model.

GenerateContentStreamAsync

IAsyncEnumerable< GenerateContentResponse > GenerateContentStreamAsync(
  string text,
  CancellationToken cancellationToken
)

Generates new content as a stream from input text given to the model as a prompt.

Details
Parameters
text
The text given to the model as a prompt.
cancellationToken
An optional token to cancel the operation.
Exceptions
HttpRequestException
Thrown when an error occurs during content generation.
Returns
A stream of generated content responses from the model.

GenerateContentStreamAsync

IAsyncEnumerable< GenerateContentResponse > GenerateContentStreamAsync(
  IEnumerable< ModelContent > content,
  CancellationToken cancellationToken
)

Generates new content as a stream from input ModelContent given to the model as a prompt.

Details
Parameters
content
The input given to the model as a prompt.
cancellationToken
An optional token to cancel the operation.
Exceptions
HttpRequestException
Thrown when an error occurs during content generation.
Returns
A stream of generated content responses from the model.
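Because the streaming overloads return IAsyncEnumerable, they are consumed with `await foreach`, receiving partial responses as they arrive instead of waiting for the full reply. A sketch, assuming `model` is an existing GenerativeModel and that GenerateContentResponse exposes a `Text` convenience property:

```csharp
using System.Threading.Tasks;
using Firebase.AI;

public static class StreamingGeneration
{
    public static async Task StreamStoryAsync(GenerativeModel model)
    {
        // Chunks arrive incrementally; log each partial response as it lands.
        await foreach (GenerateContentResponse chunk in
                       model.GenerateContentStreamAsync("Tell me a short story."))
        {
            UnityEngine.Debug.Log(chunk.Text);
        }
    }
}
```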

StartChat

Chat StartChat(
  params ModelContent[] history
)

Creates a new chat conversation using this model with the provided history.

Details
Parameters
history
Initial content history to start with.
Returns
A new Chat instance seeded with the provided history.

StartChat

Chat StartChat(
  IEnumerable< ModelContent > history
)

Creates a new chat conversation using this model with the provided history.

Details
Parameters
history
Initial content history to start with.
Returns
A new Chat instance seeded with the provided history.
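A sketch of starting and continuing a chat session. Since `history` is a `params` array, it can be omitted to begin an empty conversation; the example assumes the returned Chat exposes a `SendMessageAsync` method and that responses expose a `Text` property, as in the other Firebase AI SDKs:

```csharp
using System.Threading.Tasks;
using Firebase.AI;

public static class ChatExample
{
    public static async Task ChatAsync(GenerativeModel model)
    {
        // Start with an empty history; prior ModelContent turns could be
        // passed here instead to resume an earlier conversation.
        Chat chat = model.StartChat();

        // Assumption: SendMessageAsync appends the message to the history
        // and returns the model's reply.
        GenerateContentResponse reply =
            await chat.SendMessageAsync("Hello! Can you help me plan a trip?");

        UnityEngine.Debug.Log(reply.Text);
    }
}
```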