You can use safety settings to adjust the likelihood of getting responses that could be considered harmful. By default, the safety settings block content that has a medium and/or high probability of being unsafe across all dimensions.
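As a rough mental model, each threshold names the lowest harm-probability level that gets blocked. The sketch below is illustrative only (it is not the SDK's implementation); the level and threshold names mirror the enums used in the examples that follow.

```javascript
// Illustrative sketch only -- not the SDK implementation. Each threshold
// names the lowest harm-probability level that it blocks.
const PROBABILITY_LEVELS = ["NEGLIGIBLE", "LOW", "MEDIUM", "HIGH"];

const THRESHOLD_MINIMUMS = {
  BLOCK_LOW_AND_ABOVE: "LOW",
  BLOCK_MEDIUM_AND_ABOVE: "MEDIUM", // the default behavior
  BLOCK_ONLY_HIGH: "HIGH",
  BLOCK_NONE: null, // never blocks
};

// Returns true if content with the given harm probability would be blocked
// under the given threshold.
function isBlocked(threshold, probability) {
  const min = THRESHOLD_MINIMUMS[threshold];
  if (min === null) return false;
  return PROBABILITY_LEVELS.indexOf(probability) >= PROBABILITY_LEVELS.indexOf(min);
}
```

So, for example, `BLOCK_ONLY_HIGH` lets medium-probability content through, while the default `BLOCK_MEDIUM_AND_ABOVE` blocks it.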
Safety settings for Gemini models
You configure SafetySettings when you create a GenerativeModel instance.

Example with one safety setting:
import FirebaseAI

// Specify the safety settings as part of creating the `GenerativeModel` instance
let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "GEMINI_MODEL_NAME",
  safetySettings: [
    SafetySetting(harmCategory: .harassment, threshold: .blockOnlyHigh)
  ]
)
// ...
Example with multiple safety settings:
import FirebaseAI

let harassmentSafety = SafetySetting(harmCategory: .harassment, threshold: .blockOnlyHigh)
let hateSpeechSafety = SafetySetting(harmCategory: .hateSpeech, threshold: .blockMediumAndAbove)

// Specify the safety settings as part of creating the `GenerativeModel` instance
let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "GEMINI_MODEL_NAME",
  safetySettings: [harassmentSafety, hateSpeechSafety]
)
// ...
You configure SafetySettings when you create a GenerativeModel instance.

Example with one safety setting:
import com.google.firebase.vertexai.type.HarmBlockThreshold
import com.google.firebase.vertexai.type.HarmCategory
import com.google.firebase.vertexai.type.SafetySetting

// Specify the safety settings as part of creating the `GenerativeModel` instance
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
  modelName = "GEMINI_MODEL_NAME",
  safetySettings = listOf(
    SafetySetting(HarmCategory.HARASSMENT, HarmBlockThreshold.ONLY_HIGH)
  )
)
// ...
Example with multiple safety settings:
import com.google.firebase.vertexai.type.HarmBlockThreshold
import com.google.firebase.vertexai.type.HarmCategory
import com.google.firebase.vertexai.type.SafetySetting

val harassmentSafety = SafetySetting(HarmCategory.HARASSMENT, HarmBlockThreshold.ONLY_HIGH)
val hateSpeechSafety = SafetySetting(HarmCategory.HATE_SPEECH, HarmBlockThreshold.MEDIUM_AND_ABOVE)

// Specify the safety settings as part of creating the `GenerativeModel` instance
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
  modelName = "GEMINI_MODEL_NAME",
  safetySettings = listOf(harassmentSafety, hateSpeechSafety)
)
// ...
You configure SafetySettings when you create a GenerativeModel instance.

Example with one safety setting:
SafetySetting harassmentSafety = new SafetySetting(HarmCategory.HARASSMENT,
    HarmBlockThreshold.ONLY_HIGH);

// Specify the safety settings as part of creating the `GenerativeModel` instance
GenerativeModelFutures model = GenerativeModelFutures.from(
    FirebaseAI.getInstance(GenerativeBackend.googleAI())
        .generativeModel(
            /* modelName */ "GEMINI_MODEL_NAME",
            /* generationConfig (optional) */ null,
            Collections.singletonList(harassmentSafety)
        )
);
// ...
Example with multiple safety settings:
SafetySetting harassmentSafety = new SafetySetting(HarmCategory.HARASSMENT,
    HarmBlockThreshold.ONLY_HIGH);
SafetySetting hateSpeechSafety = new SafetySetting(HarmCategory.HATE_SPEECH,
    HarmBlockThreshold.MEDIUM_AND_ABOVE);

// Specify the safety settings as part of creating the `GenerativeModel` instance
GenerativeModelFutures model = GenerativeModelFutures.from(
    FirebaseAI.getInstance(GenerativeBackend.googleAI())
        .generativeModel(
            /* modelName */ "GEMINI_MODEL_NAME",
            /* generationConfig (optional) */ null,
            List.of(harassmentSafety, hateSpeechSafety)
        )
);
// ...
You configure SafetySettings when you create a GenerativeModel instance.

Example with one safety setting:
import { HarmBlockThreshold, HarmCategory, getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";
// ...
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });
const safetySettings = [
  {
    category: HarmCategory.HARM_CATEGORY_HARASSMENT,
    threshold: HarmBlockThreshold.BLOCK_ONLY_HIGH,
  },
];

// Specify the safety settings as part of creating the `GenerativeModel` instance
const model = getGenerativeModel(ai, { model: "GEMINI_MODEL_NAME", safetySettings });
// ...
Example with multiple safety settings:
import { HarmBlockThreshold, HarmCategory, getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";
// ...
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });
const safetySettings = [
  {
    category: HarmCategory.HARM_CATEGORY_HARASSMENT,
    threshold: HarmBlockThreshold.BLOCK_ONLY_HIGH,
  },
  {
    category: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
    threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
  },
];

// Specify the safety settings as part of creating the `GenerativeModel` instance
const model = getGenerativeModel(ai, { model: "GEMINI_MODEL_NAME", safetySettings });
// ...
You configure SafetySettings when you create a GenerativeModel instance.

Example with one safety setting:
// ...
final safetySettings = [
  SafetySetting(HarmCategory.harassment, HarmBlockThreshold.high)
];

// Specify the safety settings as part of creating the `GenerativeModel` instance
final model = FirebaseAI.googleAI().generativeModel(
  model: 'GEMINI_MODEL_NAME',
  safetySettings: safetySettings,
);
// ...
Example with multiple safety settings:
// ...
final safetySettings = [
  SafetySetting(HarmCategory.harassment, HarmBlockThreshold.high),
  SafetySetting(HarmCategory.hateSpeech, HarmBlockThreshold.high),
];

// Specify the safety settings as part of creating the `GenerativeModel` instance
final model = FirebaseAI.googleAI().generativeModel(
  model: 'GEMINI_MODEL_NAME',
  safetySettings: safetySettings,
);
// ...
You configure SafetySettings when you create a GenerativeModel instance.

Example with one safety setting:
// ...
// Specify the safety settings as part of creating the `GenerativeModel` instance
var ai = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI());
var model = ai.GetGenerativeModel(
  modelName: "GEMINI_MODEL_NAME",
  safetySettings: new SafetySetting[] {
    new SafetySetting(HarmCategory.Harassment, SafetySetting.HarmBlockThreshold.OnlyHigh)
  }
);
// ...
Example with multiple safety settings:
// ...
var harassmentSafety = new SafetySetting(HarmCategory.Harassment, SafetySetting.HarmBlockThreshold.OnlyHigh);
var hateSpeechSafety = new SafetySetting(HarmCategory.HateSpeech, SafetySetting.HarmBlockThreshold.MediumAndAbove);
// Specify the safety settings as part of creating the `GenerativeModel` instance
var ai = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI());
var model = ai.GetGenerativeModel(
  modelName: "GEMINI_MODEL_NAME",
  safetySettings: new SafetySetting[] { harassmentSafety, hateSpeechSafety }
);
// ...
Safety settings for Imagen models
Learn about all of the supported safety settings and their available values for Imagen models in the Google Cloud documentation.
You configure ImagenSafetySettings when you create an ImagenModel instance.
import FirebaseAI
// Specify the safety settings as part of creating the `ImagenModel` instance
let model = FirebaseAI.firebaseAI(backend: .googleAI()).imagenModel(
  modelName: "IMAGEN_MODEL_NAME",
  safetySettings: ImagenSafetySettings(
    safetyFilterLevel: .blockLowAndAbove,
    personFilterLevel: .allowAdult
  )
)
// ...
You configure ImagenSafetySettings when you create an ImagenModel instance.
// Specify the safety settings as part of creating the `ImagenModel` instance
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).imagenModel(
  modelName = "IMAGEN_MODEL_NAME",
  safetySettings = ImagenSafetySettings(
    safetyFilterLevel = ImagenSafetyFilterLevel.BLOCK_LOW_AND_ABOVE,
    personFilterLevel = ImagenPersonFilterLevel.BLOCK_ALL
  )
)
// ...
You configure ImagenSafetySettings when you create an ImagenModel instance.
// Specify the safety settings as part of creating the `ImagenModel` instance
ImagenModelFutures model = ImagenModelFutures.from(
    FirebaseAI.getInstance(GenerativeBackend.googleAI())
        .imagenModel(
            /* modelName */ "IMAGEN_MODEL_NAME",
            /* imageGenerationConfig */ null)
);
// ...
You configure ImagenSafetySettings when you create an ImagenModel instance.
// ...
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });
// Specify the safety settings as part of creating the `ImagenModel` instance
const model = getImagenModel(
  ai,
  {
    model: "IMAGEN_MODEL_NAME",
    safetySettings: {
      safetyFilterLevel: ImagenSafetyFilterLevel.BLOCK_LOW_AND_ABOVE,
      personFilterLevel: ImagenPersonFilterLevel.ALLOW_ADULT,
    }
  }
);
// ...
You configure ImagenSafetySettings when you create an ImagenModel instance.
// ...
// Specify the safety settings as part of creating the `ImagenModel` instance
final model = FirebaseAI.googleAI().imagenModel(
  model: 'IMAGEN_MODEL_NAME',
  safetySettings: ImagenSafetySettings(
    ImagenSafetyFilterLevel.blockLowAndAbove,
    ImagenPersonFilterLevel.allowAdult,
  ),
);
// ...
Using Imagen isn't yet supported for Unity, but check back soon.
Other options to control content generation
- Learn more about prompt design so that you can influence the model to generate output specific to your needs.
- Configure model parameters to control how the model generates a response. For Gemini models, these parameters include max output tokens, temperature, topK, and topP. For Imagen models, they include aspect ratio, person generation, watermarking, and more.
- Set system instructions to steer the behavior of the model. This feature is like a preamble that you add before the model is exposed to any further instructions from the end user.
- Pass a response schema along with the prompt to specify a specific output schema. This feature is most commonly used when generating JSON output, but it can also be used for classification tasks (such as when you want the model to use specific labels or tags).
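To make the model-parameter and response-schema options concrete, here is a sketch of the object shapes involved. These are illustrative only; the exact option names and the way you pass them differ by platform, so check your SDK's reference documentation.

```javascript
// Illustrative shapes only -- option names vary by SDK.
// A model-parameter config for a Gemini model:
const generationConfig = {
  maxOutputTokens: 256,
  temperature: 0.4,
  topK: 40,
  topP: 0.95,
};

// A JSON response schema for a classification task, constraining the model
// to answer with one of a fixed set of labels:
const responseSchema = {
  type: "object",
  properties: {
    label: { type: "string", enum: ["positive", "neutral", "negative"] },
  },
  required: ["label"],
};
```

Both objects would then be supplied when creating the model instance, alongside the safety settings shown earlier on this page.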