Firebase.AI.UsageMetadata

Token usage metadata for processing the generate content request.

Summary

Properties

CandidatesTokenCount
int
The total number of tokens across the generated response candidates.
CandidatesTokensDetails
IReadOnlyList< ModalityTokenCount >
The breakdown, by modality, of the token counts in the generated response candidates.
PromptTokenCount
int
The number of tokens in the request prompt.
PromptTokensDetails
IReadOnlyList< ModalityTokenCount >
The breakdown, by modality, of the token counts in the request prompt.
ThoughtsTokenCount
int
The number of tokens used by the model's internal "thinking" process.
TotalTokenCount
int
The total number of tokens in both the request and response.
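
The sketch below shows one way these counts might be read after a GenerateContentAsync call in Unity. The model name is a placeholder, and the nullable UsageMetadata property on GenerateContentResponse is assumed rather than confirmed by this page.

using System.Threading.Tasks;
using Firebase.AI;
using UnityEngine;

public class TokenUsageExample : MonoBehaviour
{
    async Task LogTokenUsage()
    {
        // Placeholder model name; any generative model can be used here.
        var model = FirebaseAI.DefaultInstance.GetGenerativeModel(modelName: "gemini-2.5-flash");
        var response = await model.GenerateContentAsync("Write a haiku about token counting.");

        // UsageMetadata is assumed to be a nullable property on the response.
        if (response.UsageMetadata is UsageMetadata usage)
        {
            Debug.Log($"Prompt tokens:    {usage.PromptTokenCount}");
            Debug.Log($"Candidate tokens: {usage.CandidatesTokenCount}");
            Debug.Log($"Thinking tokens:  {usage.ThoughtsTokenCount}");
            Debug.Log($"Total tokens:     {usage.TotalTokenCount}");
        }
    }
}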

Properties

CandidatesTokenCount

int Firebase::AI::UsageMetadata::CandidatesTokenCount

The total number of tokens across the generated response candidates.

CandidatesTokensDetails

IReadOnlyList< ModalityTokenCount > Firebase::AI::UsageMetadata::CandidatesTokensDetails

The breakdown, by modality, of the token counts in the generated response candidates.

PromptTokenCount

int Firebase::AI::UsageMetadata::PromptTokenCount

The number of tokens in the request prompt.

PromptTokensDetails

IReadOnlyList< ModalityTokenCount > Firebase::AI::UsageMetadata::PromptTokensDetails

The breakdown, by modality, of the token counts in the request prompt.
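
Both details properties can be iterated to see how the prompt and response tokens split across modalities. The sketch below assumes ModalityTokenCount exposes Modality and TokenCount members; those names are assumptions, not confirmed by this page.

using Firebase.AI;
using UnityEngine;

public static class ModalityBreakdownLogger
{
    // Assumes ModalityTokenCount has Modality and TokenCount members.
    public static void Log(UsageMetadata usage)
    {
        foreach (var detail in usage.PromptTokensDetails)
        {
            Debug.Log($"Prompt ({detail.Modality}): {detail.TokenCount} tokens");
        }
        foreach (var detail in usage.CandidatesTokensDetails)
        {
            Debug.Log($"Response ({detail.Modality}): {detail.TokenCount} tokens");
        }
    }
}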

ThoughtsTokenCount

int Firebase::AI::UsageMetadata::ThoughtsTokenCount

The number of tokens used by the model's internal "thinking" process.

For models that support thinking (like Gemini 2.5 Pro and Flash), this represents the actual number of tokens consumed for reasoning before the model generated a response. For models that do not support thinking, this value will be 0.

When thinking is used, this count will be less than or equal to the thinkingBudget set in the ThinkingConfig.
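
The following sketch illustrates how ThoughtsTokenCount relates to a thinking budget. It assumes ThinkingConfig is supplied through a thinkingConfig parameter on GenerationConfig, as in the other Firebase AI SDKs; the parameter names and budget value are illustrative only.

using System.Threading.Tasks;
using Firebase.AI;
using UnityEngine;

public class ThinkingBudgetExample : MonoBehaviour
{
    async Task LogThinkingTokens()
    {
        // Assumed wiring: ThinkingConfig passed via GenerationConfig.
        var model = FirebaseAI.DefaultInstance.GetGenerativeModel(
            modelName: "gemini-2.5-flash",
            generationConfig: new GenerationConfig(
                thinkingConfig: new ThinkingConfig(thinkingBudget: 1024)));

        var response = await model.GenerateContentAsync("Plan a three-course dinner.");

        if (response.UsageMetadata is UsageMetadata usage)
        {
            // Expected to be <= the 1024-token budget above, and 0 for
            // models that do not support thinking.
            Debug.Log($"Thinking tokens used: {usage.ThoughtsTokenCount}");
        }
    }
}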

TotalTokenCount

int Firebase::AI::UsageMetadata::TotalTokenCount

The total number of tokens in both the request and response.