Firebase.AI.SafetyRating

A type defining potentially harmful media categories and their model-assigned ratings.

Summary

A value of this type may be assigned to a category for every model-generated response, not just responses that exceed a certain threshold.
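
As a minimal sketch, the snippet below logs each SafetyRating an application receives. It uses only the members documented on this page; how the ratings collection is obtained (for example, from a response candidate) is left to the caller and is not defined here.

using System;
using System.Collections.Generic;
using Firebase.AI;

public static class SafetyRatingInspector {
  // Prints every documented member of each SafetyRating in a collection.
  public static void LogRatings(IEnumerable<SafetyRating> ratings) {
    foreach (SafetyRating rating in ratings) {
      Console.WriteLine(
          $"Category: {rating.Category}, " +
          $"Probability: {rating.Probability} (score {rating.ProbabilityScore:F1}), " +
          $"Severity: {rating.Severity} (score {rating.SeverityScore:F1}), " +
          $"Blocked: {rating.Blocked}");
    }
  }
}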

Public types

HarmProbability{
  Unknown = 0,
  Negligible,
  Low,
  Medium,
  High
}
enum
The probability that a given model output falls under a harmful content category.
HarmSeverity{
  Unknown = 0,
  Negligible,
  Low,
  Medium,
  High
}
enum
The magnitude of how harmful a model response might be for the respective HarmCategory.

Properties

Blocked
bool
If true, the response was blocked.
Category
HarmCategory
The category describing the potential harm a piece of content may pose.
Probability
HarmProbability
The model-generated probability that the content falls under the specified HarmCategory.
ProbabilityScore
float
The confidence score that the response is associated with the corresponding HarmCategory.
Severity
HarmSeverity
The severity reflects the magnitude of how harmful a model response might be.
SeverityScore
float
The severity score is the magnitude of how harmful a model response might be.

Public types

HarmProbability

enum Firebase::AI::SafetyRating::HarmProbability

The probability that a given model output falls under a harmful content category.

Note: This does not indicate the severity of harm for a piece of content.

Properties
High

The probability is high.

The content described is very likely harmful.

Low

The probability is small but non-zero.

Medium

The probability is moderate.

Negligible

The probability is zero or close to zero.

For benign content, the probability across all categories will be this value.

Unknown

A new and not yet supported value.
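
As an illustration only, a caller might map these probability levels onto an application-level decision. The "review at Medium or above" policy below is an assumption made for the example, not behavior defined by the SDK.

using Firebase.AI;

public static class ProbabilityPolicy {
  // Returns true when this illustrative policy would flag the content for review.
  public static bool NeedsReview(SafetyRating rating) {
    switch (rating.Probability) {
      case SafetyRating.HarmProbability.High:
      case SafetyRating.HarmProbability.Medium:
        return true;  // Moderate or high probability of harmful content.
      case SafetyRating.HarmProbability.Low:
      case SafetyRating.HarmProbability.Negligible:
        return false; // Small or near-zero probability.
      default:
        return true;  // Unknown: a new, not yet supported value; treat conservatively.
    }
  }
}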

HarmSeverity

enum Firebase::AI::SafetyRating::HarmSeverity

The magnitude of how harmful a model response might be for the respective HarmCategory.

Properties
High

High level of harm severity.

Low

Low level of harm severity.

Medium

Medium level of harm severity.

Negligible

Negligible level of harm severity.

Unknown

A new and not yet supported value.

Properties

Blocked

bool Firebase::AI::SafetyRating::Blocked

If true, the response was blocked.

Category

HarmCategory Firebase::AI::SafetyRating::Category

The category describing the potential harm a piece of content may pose.

Probability

HarmProbability Firebase::AI::SafetyRating::Probability

The model-generated probability that the content falls under the specified HarmCategory.

This is a discretized representation of the ProbabilityScore.

Important: This does not indicate the severity of harm for a piece of content.

ProbabilityScore

float Firebase::AI::SafetyRating::ProbabilityScore

The confidence score that the response is associated with the corresponding HarmCategory.

The probability safety score is a confidence score between 0.0 and 1.0, rounded to one decimal place; it is discretized into a HarmProbability in Probability. See probability scores in the Google Cloud documentation for more details.
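
Because ProbabilityScore is the continuous value behind the discretized Probability, an application that wants a finer cutoff than the built-in levels could compare the raw score against its own threshold. The 0.7 threshold in this sketch is purely illustrative.

using Firebase.AI;

public static class ScoreFilter {
  // Illustrative custom cutoff applied to the raw probability score (0.0 to 1.0).
  private const float ReviewThreshold = 0.7f;

  public static bool ExceedsThreshold(SafetyRating rating) {
    return rating.ProbabilityScore >= ReviewThreshold;
  }
}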

Severity

HarmSeverity Firebase::AI::SafetyRating::Severity

The severity reflects the magnitude of how harmful a model response might be.

This is a discretized representation of the SeverityScore.

SeverityScore

float Firebase::AI::SafetyRating::SeverityScore

The severity score is the magnitude of how harmful a model response might be.

The severity score ranges from 0.0 to 1.0, rounded to one decimal place; it is discretized into a HarmSeverity in Severity. See severity scores in the Google Cloud documentation for more details.
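
As a final sketch, the raw severity scores can be used to rank ratings, for example to surface the single most severe category reported for a response. The LINQ-based helper below is illustrative and assumes a non-empty collection.

using System.Collections.Generic;
using System.Linq;
using Firebase.AI;

public static class SeverityRanking {
  // Returns the rating with the highest raw severity score (0.0 to 1.0).
  public static SafetyRating MostSevere(IEnumerable<SafetyRating> ratings) {
    return ratings.OrderByDescending(r => r.SeverityScore).First();
  }
}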