Response from calling GenerativeModel.countTokens().
Signature:
export interface CountTokensResponse
Properties
| Property | Type | Description |
| --- | --- | --- |
| promptTokensDetails | ModalityTokenCount[] | The breakdown, by modality, of how many tokens are consumed by the prompt. |
| totalBillableCharacters | number | The total number of billable characters counted across all instances from the request. This property is only supported when using the Vertex AI Gemini API (VertexAIBackend). When using the Gemini Developer API (GoogleAIBackend), this property is not supported and will default to 0. |
| totalTokens | number | The total number of tokens counted across all instances from the request. |
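For orientation, here is a minimal sketch of obtaining a CountTokensResponse with the Firebase AI Logic SDK. The Firebase config and model name are placeholders, and the firebase/ai entry points shown (getAI, getGenerativeModel, GoogleAIBackend) are assumptions about your SDK version rather than part of this interface's definition.

```typescript
import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";

// Placeholder config; substitute your project's Firebase configuration.
const app = initializeApp({ /* ... */ });

// Gemini Developer API backend; swap in VertexAIBackend() for Vertex AI.
const ai = getAI(app, { backend: new GoogleAIBackend() });
const model = getGenerativeModel(ai, { model: "gemini-2.0-flash" });

// countTokens() resolves to a CountTokensResponse.
const response = await model.countTokens("Why is the sky blue?");
console.log(`Total tokens: ${response.totalTokens}`);
```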
CountTokensResponse.promptTokensDetails
The breakdown, by modality, of how many tokens are consumed by the prompt.
Signature:
promptTokensDetails?: ModalityTokenCount[];
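A sketch of reading the per-modality breakdown, reusing the `model` from the example above. It assumes each ModalityTokenCount entry exposes `modality` and `tokenCount` fields; check the ModalityTokenCount reference for the exact shape.

```typescript
// promptTokensDetails is optional, so default to an empty array.
const { promptTokensDetails } = await model.countTokens("Describe this image.");

for (const detail of promptTokensDetails ?? []) {
  console.log(`${detail.modality}: ${detail.tokenCount} tokens`);
}
```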
CountTokensResponse.totalBillableCharacters
The total number of billable characters counted across all instances from the request.
This property is only supported when using the Vertex AI Gemini API (VertexAIBackend). When using the Gemini Developer API (GoogleAIBackend), this property is not supported and will default to 0.
Signature:
totalBillableCharacters?: number;
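Because totalBillableCharacters is only populated for the Vertex AI Gemini API, a backend-aware read might look like the following sketch. VertexAIBackend is assumed to be exported from firebase/ai, the location argument is illustrative, and `app` comes from the earlier example.

```typescript
import { getAI, getGenerativeModel, VertexAIBackend } from "firebase/ai";

// Vertex AI Gemini API backend, which reports billable characters.
const vertexAI = getAI(app, { backend: new VertexAIBackend("us-central1") });
const vertexModel = getGenerativeModel(vertexAI, { model: "gemini-2.0-flash" });

const { totalBillableCharacters } = await vertexModel.countTokens("Hello!");

// Falls back to 0 when the backend does not report billable characters.
console.log(`Billable characters: ${totalBillableCharacters ?? 0}`);
```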
CountTokensResponse.totalTokens
The total number of tokens counted across all instances from the request.
Signature:
totalTokens: number;