Firebase.AI.SafetySetting

A type used to specify a threshold for harmful content, beyond which the model will return a fallback response instead of generated content.

Summary

Constructors and Destructors

SafetySetting(HarmCategory category, HarmBlockThreshold threshold, HarmBlockMethod? method)
Initializes a new safety setting with the given category and threshold.

Public types

HarmBlockMethod { Probability, Severity }
enum
The method of computing whether the threshold has been exceeded.

HarmBlockThreshold { LowAndAbove, MediumAndAbove, OnlyHigh, None, Off }
enum
Block at and beyond a specified threshold.

Public types

HarmBlockMethod

 Firebase::AI::SafetySetting::HarmBlockMethod

The method of computing whether the threshold has been exceeded.

Properties
Probability

Use only the probability score.

Severity

Use both probability and severity scores.

HarmBlockThreshold

 Firebase::AI::SafetySetting::HarmBlockThreshold

Block at and beyond a specified threshold.

Properties
LowAndAbove

Content with negligible harm is allowed.

MediumAndAbove

Content with negligible to low harm is allowed.

None

All content is allowed regardless of harm.

Off

All content is allowed regardless of harm, and metadata will not be included in the response.

OnlyHigh

Content with negligible to medium harm is allowed.
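
For illustration, the threshold controls how much of the harm spectrum remains allowed. A minimal sketch, using the constructor documented below; the HarmCategory value shown is an assumption about the companion HarmCategory enum, which is documented separately:

using Firebase.AI;

// Strict filtering: only negligible-harm content is allowed for this category.
// HarmCategory.Harassment is assumed here; see the HarmCategory reference for actual values.
var strict = new SafetySetting(
    HarmCategory.Harassment,
    SafetySetting.HarmBlockThreshold.LowAndAbove);

// Filtering disabled: all content is allowed and no safety metadata is returned.
// The method argument is omitted, since the constructor notes below state a default applies when unspecified.
var disabled = new SafetySetting(
    HarmCategory.Harassment,
    SafetySetting.HarmBlockThreshold.Off);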

Public functions

SafetySetting

 Firebase::AI::SafetySetting::SafetySetting(
  HarmCategory category,
  HarmBlockThreshold threshold,
  HarmBlockMethod? method
)

Initializes a new safety setting with the given category and threshold.

Details
Parameters
category
The category this safety setting should be applied to.
threshold
The threshold describing what content should be blocked.
method
The method of computing whether the threshold has been exceeded; if not specified, the default method is Severity for most models. This parameter is unused in the GoogleAI backend.
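
As a usage sketch (not part of this reference), a setting can be constructed with an explicit method and passed to a model. Only the SafetySetting constructor itself is documented on this page; the HarmCategory value, the GetGenerativeModel call, and its parameter names are assumptions about the surrounding Firebase.AI API.

using Firebase.AI;

// Block dangerous content at Medium and above, scoring by probability only.
// HarmCategory.DangerousContent is assumed; the method argument is ignored by the GoogleAI backend.
var setting = new SafetySetting(
    HarmCategory.DangerousContent,
    SafetySetting.HarmBlockThreshold.MediumAndAbove,
    SafetySetting.HarmBlockMethod.Probability);

// Hypothetical wiring: the model factory, model name, and parameter names below are assumptions.
var model = FirebaseAI.DefaultInstance.GetGenerativeModel(
    modelName: "gemini-2.0-flash",
    safetySettings: new[] { setting });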