Firebase.AI.SafetyRating
A type defining potentially harmful media categories and their model-assigned ratings.
Summary
A value of this type may be assigned to a category for every model-generated response, not just responses that exceed a certain threshold.
Public types

| Type | Description |
| --- | --- |
| HarmProbability | enum. The probability that a given model output falls under a harmful content category. |
| HarmSeverity | enum. The magnitude of how harmful a model response might be for the respective HarmCategory. |
Properties

| Property | Type | Description |
| --- | --- | --- |
| Blocked | bool | If true, the response was blocked. |
| Category | HarmCategory | The category describing the potential harm a piece of content may pose. |
| Probability | HarmProbability | The model-generated probability that the content falls under the specified HarmCategory. |
| ProbabilityScore | float | The confidence score that the response is associated with the corresponding HarmCategory. |
| Severity | HarmSeverity | The severity reflects the magnitude of how harmful a model response might be. |
| SeverityScore | float | The severity score is the magnitude of how harmful a model response might be. |
Public types
HarmProbability
Firebase::AI::SafetyRating::HarmProbability
The probability that a given model output falls under a harmful content category.
Note: This does not indicate the severity of harm for a piece of content.
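For orientation, here is a minimal sketch of how a caller might branch on the discretized probability bucket. The enum member names (High, Medium) and the nested SafetyRating.HarmProbability reference are assumptions about the generated C# API; only the Probability and Category properties are documented on this page.

```csharp
using Firebase.AI;

public static class HarmProbabilityExample
{
    // Minimal sketch: map a rating's discretized probability bucket to a
    // human-readable label. The member names High/Medium are assumptions
    // about this enum's values; they are not listed on this page.
    public static string DescribeRating(SafetyRating rating)
    {
        switch (rating.Probability)
        {
            case SafetyRating.HarmProbability.High:
                return $"{rating.Category}: likely harmful";
            case SafetyRating.HarmProbability.Medium:
                return $"{rating.Category}: possibly harmful";
            default:
                return $"{rating.Category}: unlikely to be harmful";
        }
    }
}
```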
HarmSeverity
Firebase::AI::SafetyRating::HarmSeverity
The magnitude of how harmful a model response might be for the respective HarmCategory.
Properties
Blocked
bool Firebase::AI::SafetyRating::Blocked
If true, the response was blocked.
Category
HarmCategory Firebase::AI::SafetyRating::Category
The category describing the potential harm a piece of content may pose.
Probability
HarmProbability Firebase::AI::SafetyRating::Probability
The model-generated probability that the content falls under the specified HarmCategory.
This is a discretized representation of the ProbabilityScore.
Important: This does not indicate the severity of harm for a piece of content.
ProbabilityScore
float Firebase::AI::SafetyRating::ProbabilityScore
The confidence score that the response is associated with the corresponding HarmCategory.
The probability safety score is a confidence score between 0.0 and 1.0, rounded to one decimal place; it is discretized into a HarmProbability in Probability. See probability scores in the Google Cloud documentation for more details.
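To make the score-versus-bucket relationship concrete, here is a hedged sketch that logs each rating's discretized Probability next to its raw ProbabilityScore. The FirebaseAI.DefaultInstance, GetGenerativeModel, GenerateContentAsync, Candidates, and SafetyRatings members are assumptions about the surrounding Firebase AI Unity API, not definitions from this page.

```csharp
using System.Threading.Tasks;
using Firebase.AI;
using UnityEngine;

public static class ProbabilityScoreExample
{
    // Sketch under assumptions: generate a response, then print each safety
    // rating's bucketed probability alongside the raw 0.0-1.0 score it was
    // discretized from.
    public static async Task LogProbabilitiesAsync()
    {
        // Assumed entry points for the Firebase AI Unity SDK.
        var model = FirebaseAI.DefaultInstance.GetGenerativeModel(
            modelName: "gemini-2.0-flash");
        var response = await model.GenerateContentAsync("Tell me a story.");

        foreach (var candidate in response.Candidates)
        {
            foreach (SafetyRating rating in candidate.SafetyRatings)
            {
                // Probability is the discretized form of ProbabilityScore.
                Debug.Log($"{rating.Category}: {rating.Probability} " +
                          $"(score {rating.ProbabilityScore:F1})");
            }
        }
    }
}
```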
Severity
HarmSeverity Firebase::AI::SafetyRating::Severity
The severity reflects the magnitude of how harmful a model response might be.
This is a discretized representation of the SeverityScore.
SeverityScore
float Firebase::AI::SafetyRating::SeverityScore
The severity score is the magnitude of how harmful a model response might be.
The severity score ranges from 0.0 to 1.0, rounded to one decimal place; it is discretized into a HarmSeverity in Severity. See severity scores in the Google Cloud documentation for more details.
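As a usage sketch for the severity fields, the helper below combines Blocked with a caller-chosen cutoff on the raw SeverityScore. The IsConcerning helper and its 0.5 default threshold are illustrative inventions, not part of the SDK; only the Blocked, Category, and SeverityScore properties come from this page.

```csharp
using System.Collections.Generic;
using System.Linq;
using Firebase.AI;

public static class SeverityScoreExample
{
    // Hypothetical helper: treat a rating as concerning if the response was
    // blocked outright or the raw severity score crosses the threshold.
    // The 0.5f cutoff is an illustrative choice, not an SDK default.
    public static bool IsConcerning(SafetyRating rating, float threshold = 0.5f)
    {
        return rating.Blocked || rating.SeverityScore >= threshold;
    }

    // Collects the harm categories of all concerning ratings, e.g. for logging.
    public static IEnumerable<HarmCategory> ConcerningCategories(
        IEnumerable<SafetyRating> ratings, float threshold = 0.5f)
    {
        return ratings.Where(r => IsConcerning(r, threshold))
                      .Select(r => r.Category);
    }
}
```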