Responsibly advancing AI and robotics
We’re developing a broad and rigorous safety framework so Gemini-controlled robots can be used responsibly in real-life environments
We use multiple layers of semantic, physical, and operational safeguards to reduce and mitigate risks – an approach based on in-depth research.
Our safety framework in detail
Think of our safety framework as a stack of Swiss cheese slices. No single slice is a perfect barrier. But together, they help prevent accidents.
Human-robot interaction
We’ve implemented safeguards to improve the way our models behave in social settings – in what they say, their gestures, and their actions. This falls in line with our Gemini safety policies.
Semantic safety
It’s important these models have what a human would call ‘common sense’. For example, they mustn’t hand a boiling drink to a young child, or pass a very heavy box to a human. We’ve released new datasets and benchmarks to help us evaluate and improve semantic safety.
Physical safety
Our VLA models can be composed with lower-level safety mechanisms to help prevent accidents. And we’re implementing best practices for safe data collection and evaluation.
Assessing vulnerabilities
We’ve created systems that continuously search for vulnerabilities within our robotics models. By uncovering potential gaps, we can respond with further improvements. Read the Gemini Robotics 1.5 tech report for more details on our scalable adversarial evaluations.
Thinking improves safety
The ability to think through real-world risks helps Gemini Robotics 1.5 act appropriately while interacting with humans, while making its decisions more transparent in natural language. Here are a few examples of how it makes safe decisions in real-world environments.
Assessing risks in interactions
While interacting with objects, the models are designed to use logic to decide whether an action is appropriate and safe. For example, assessing an item’s weight to decide if it’s possible to lift.
Environmental risks
The models can identify risks within the physical surroundings, and in videos they are told to watch. For example, if they see a person potentially being exposed to an electric shock, they can assess and identify that risk.
Responding to safety distances
The models are able to detect humans entering their operating area. If this happens, they are designed to stop work to help keep people safe from accidental harm. This is part of our ongoing research – this feature is not a guaranteed safety-rated system.
Related publications
Our research team has produced a number of key papers that have informed our rigorous approach to safety.
Generating Robot Constitutions & Benchmarks for Semantic Safety
Predictive Red Teaming: Breaking Policies Without Breaking Robots
SciFi-Benchmark: How Would AI-Powered Robots Behave in Science Fiction Literature?
2024 Best RoboCup Paper Award
Embodied AI with Two Arms: Zero-shot Learning, Safety and Modularity
Safely Learning Dynamical Systems
Finalist for Best Planning Paper Award
Optimizing Trajectories with Closed-loop Dynamic SQP
Learning Model Predictive Controllers with Real-Time Attention for Real-World Navigation