OnDeviceConfig

@PublicPreviewAPI
class OnDeviceConfig


Configuration for on-device AI model inference.

Summary

Public companion properties

OnDeviceConfig

IN_CLOUD

A default configuration that only uses in-cloud inference.

Public constructors

OnDeviceConfig(
    mode: InferenceMode,
    maxOutputTokens: Int?,
    temperature: Float?,
    topK: Int?,
    seed: Int?,
    candidateCount: Int
)

Public properties

Int

candidateCount

The number of generated responses to return.

Int?

maxOutputTokens

The maximum number of tokens to generate in the response.

InferenceMode

mode

The InferenceMode to use for the model.

Int?

seed

The seed to use for generation to ensure reproducibility.

Float?

temperature

A parameter controlling the degree of randomness in token selection.

Int?

topK

The topK parameter changes how the model selects tokens for output.

Public companion properties

IN_CLOUD

val IN_CLOUD: OnDeviceConfig

A default configuration that only uses in-cloud inference.

Public constructors

OnDeviceConfig

OnDeviceConfig(
    mode: InferenceMode,
    maxOutputTokens: Int? = null,
    temperature: Float? = null,
    topK: Int? = null,
    seed: Int? = null,
    candidateCount: Int = 1
)
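The defaults above (only `mode` is required; `candidateCount` defaults to 1, and every sampling parameter defaults to unset) can be sketched with a minimal stand-in. Note this stand-in class and the `InferenceMode` values below are hypothetical simplifications for illustration; the real types live in the SDK.

```kotlin
// Hypothetical stand-in mirroring the documented constructor; the real
// OnDeviceConfig and InferenceMode are provided by the SDK. The enum
// values here are illustrative only.
enum class InferenceMode { ONLY_ON_DEVICE, ONLY_IN_CLOUD, PREFER_ON_DEVICE }

data class OnDeviceConfig(
    val mode: InferenceMode,
    val maxOutputTokens: Int? = null, // unset: model/SDK default applies
    val temperature: Float? = null,   // unset: model/SDK default applies
    val topK: Int? = null,
    val seed: Int? = null,
    val candidateCount: Int = 1       // one generated response by default
)

fun main() {
    // Only `mode` is required; all sampling parameters may stay unset.
    val config = OnDeviceConfig(mode = InferenceMode.PREFER_ON_DEVICE)
    check(config.candidateCount == 1)
    check(config.maxOutputTokens == null)
    println(config)
}
```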

Public properties

candidateCount

val candidateCount: Int

The number of generated responses to return. See GenerationConfig for more detail. By default it's set to 1.

maxOutputTokens

val maxOutputTokens: Int?

The maximum number of tokens to generate in the response. See GenerationConfig for more detail.

mode

val mode: InferenceMode

The InferenceMode to use for the model.

seed

val seed: Int?

The seed to use for generation to ensure reproducibility. See GenerationConfig for more detail.

temperature

val temperature: Float?

A parameter controlling the degree of randomness in token selection. See GenerationConfig for more detail.

topK

val topK: Int?

The topK parameter changes how the model selects tokens for output. See GenerationConfig for more detail.
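To make the roles of `temperature` and `topK` concrete, here is a conceptual sketch of temperature-scaled softmax followed by top-k filtering over token probabilities. This is an illustration of the general sampling technique, not the SDK's internal sampler; the function names and logit values are made up.

```kotlin
import kotlin.math.exp

// Softmax with temperature: lower values sharpen the distribution
// (more deterministic), higher values flatten it (more random).
fun softmax(logits: List<Double>, temperature: Double): List<Double> {
    val scaled = logits.map { it / temperature }
    val max = scaled.maxOrNull()!!
    val exps = scaled.map { exp(it - max) } // subtract max for stability
    val sum = exps.sum()
    return exps.map { it / sum }
}

// topK: keep only the k most probable tokens, renormalize, and
// sample from the survivors.
fun topKFilter(probs: List<Double>, k: Int): List<Double> {
    val threshold = probs.sortedDescending().take(k).last()
    val kept = probs.map { if (it >= threshold) it else 0.0 }
    val sum = kept.sum()
    return kept.map { it / sum }
}

fun main() {
    val logits = listOf(2.0, 1.0, 0.5, -1.0)
    val cold = softmax(logits, temperature = 0.2) // nearly one-hot
    val hot = softmax(logits, temperature = 2.0)  // closer to uniform
    println(topKFilter(hot, k = 2))               // only 2 tokens survive
}
```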