Firebase.AI.SafetySetting
A type used to specify a threshold for harmful content, beyond which the model will return a fallback response instead of generated content.
Summary
| Constructors and Destructors | |
|---|---|
| `SafetySetting(HarmCategory category, HarmBlockThreshold threshold, HarmBlockMethod? method)` | Initializes a new safety setting with the given category and threshold. |
| Public types | |
|---|---|
| `HarmBlockMethod` enum | The method of computing whether the threshold has been exceeded. |
| `HarmBlockThreshold` enum | Block at and beyond a specified threshold. |
Public types
HarmBlockMethod
Firebase::AI::SafetySetting::HarmBlockMethod
The method of computing whether the threshold has been exceeded.
| Properties | |
|---|---|
| `Probability` | Use only the probability score. |
| `Severity` | Use both probability and severity scores. |
HarmBlockThreshold
Firebase::AI::SafetySetting::HarmBlockThreshold
Block at and beyond a specified threshold.
| Properties | |
|---|---|
| `LowAndAbove` | Content with negligible harm is allowed. |
| `MediumAndAbove` | Content with negligible to low harm is allowed. |
| `OnlyHigh` | Content with negligible to medium harm is allowed. |
| `None` | All content is allowed regardless of harm. |
| `Off` | All content is allowed regardless of harm, and metadata will not be included in the response. |
Public functions
SafetySetting
Firebase::AI::SafetySetting::SafetySetting(HarmCategory category, HarmBlockThreshold threshold, HarmBlockMethod? method)
Initializes a new safety setting with the given category and threshold.
| Details | |
|---|---|
| Parameters | `category`, `threshold`, `method` |
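As a brief sketch of how the constructor above might be used (assuming the Firebase AI Unity SDK is installed and that `HarmCategory.Harassment` exists as an illustrative category; the optional `method` parameter may also be omitted or passed as `null`):

```csharp
using Firebase.AI;

// A minimal sketch, not a definitive usage: block harassment content rated
// Low harm and above, judging the threshold by probability score only.
// HarmCategory.Harassment is an assumed example value, not prescribed by
// this reference page.
var setting = new SafetySetting(
    HarmCategory.Harassment,
    SafetySetting.HarmBlockThreshold.LowAndAbove,
    SafetySetting.HarmBlockMethod.Probability);
```

When the threshold is exceeded for the configured category, the model returns a fallback response instead of generated content, as described at the top of this page.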