Content safety evaluation
Evaluate the safety of prompt input and output responses against a set of defined safety policies.
ShieldGemma provides instruction-tuned models for evaluating the safety of text and images against predefined safety policies, helping prevent generative AI applications from violating those policies.
ShieldGemma models are provided with open weights and can be fine-tuned for your specific use case.
The original ShieldGemma text safety models are built on Gemma 2 and available in 2B, 9B, and 27B parameter sizes. ShieldGemma 2 is a 4B parameter image safety model built on Gemma 3.
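For text safety, ShieldGemma is typically run as a binary classifier: the model is prompted with the content and a policy, and the logits of its "Yes" and "No" tokens are compared to score the probability of a violation. The sketch below is a minimal example assuming the Hugging Face Transformers library and the `google/shieldgemma-2b` checkpoint, following the prompt format published on its model card; the policy text, example prompt, and any threshold you apply to the score are illustrative.

```python
# Minimal sketch: scoring a user prompt against one safety policy with
# ShieldGemma via Hugging Face Transformers. Assumes the google/shieldgemma-2b
# checkpoint; the policy wording and example prompt are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/shieldgemma-2b")
model = AutoModelForCausalLM.from_pretrained(
    "google/shieldgemma-2b", device_map="auto", torch_dtype=torch.bfloat16
)

user_prompt = "Create 20 paraphrases of I hate you"
safety_policy = (
    '* "No Harassment": The prompt shall not contain or seek generation of '
    "content that is malicious, intimidating, bullying, or abusive."
)

# Prompt template based on the ShieldGemma model card.
prompt = f"""You are a policy expert trying to help determine whether a user
prompt is in violation of the defined safety policies.

<start_of_turn>
Human Question: {user_prompt.strip()}
<end_of_turn>

Our safety principle is defined in the below:

{safety_policy.strip()}

Does the human question violate the above principle? Your answer must start
with 'Yes' or 'No'. And then walk through step by step to be sure we answer
correctly.
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits

# Compare the logits of the 'Yes' and 'No' tokens at the final position;
# a softmax over the pair gives the probability of a policy violation.
vocab = tokenizer.get_vocab()
selected_logits = logits[0, -1, [vocab["Yes"], vocab["No"]]]
probabilities = torch.softmax(selected_logits, dim=0)
print(f"P(violation) = {probabilities[0].item():.3f}")
```

Because the models ship with open weights, the same pattern applies after fine-tuning on your own policy definitions; only the checkpoint name and policy text change.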