Gemini Robotics models allow robots of any shape and size to perceive, reason, use tools and interact with humans. They can solve a wide range of complex real-world tasks – even those they haven’t been trained to complete.

Gemini Robotics 1.5 is designed to reason through complex, multi-step tasks, making decisions to form a plan of action and then carrying out each step autonomously.



Responsibly advancing AI and robotics

To ensure Gemini Robotics benefits humanity, we’ve taken a comprehensive approach to safety, from practical safeguards to collaborations with experts, policymakers, and our Responsibility and Safety Council.


Model and tools

We take a dual-model approach, pairing a vision-language-action (VLA) model with an embodied reasoning (ER) model. Each model plays a specialized role, and together they work as a powerful and versatile system.


Gemini Robotics 1.5

Our most capable vision-language-action (VLA) model. It can ‘see’ (vision), ‘understand’ (language) and ‘act’ (action) within the physical world. It processes visual inputs and user prompts, and learns across different embodiments, improving its ability to generalize its problem-solving.


Gemini Robotics-ER 1.5

Our state-of-the-art embodied reasoning model. It specializes in understanding physical spaces, planning, and making logical decisions relating to its surroundings. It doesn’t directly control robotic limbs – but provides high-level insights to help the VLA model decide what to do next.
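The division of labor above — the ER model planning high-level steps while the VLA model perceives and acts — can be sketched as a simple control loop. This is an illustrative mock-up only: the class names, methods, and canned plan below are assumptions for the sketch, not the real Gemini Robotics API.

```python
# Hypothetical sketch of the dual-model loop: an embodied reasoning (ER)
# planner breaks a task into ordered sub-steps, and a vision-language-action
# (VLA) executor carries out each step. All names here are illustrative.

from dataclasses import dataclass, field


@dataclass
class EmbodiedReasoner:
    """Stand-in for the ER model: turns a task into an ordered plan."""

    def plan(self, task: str) -> list[str]:
        # A real ER model would reason over camera images and scene state;
        # here we return a canned three-step plan for illustration.
        return [
            f"locate the objects needed for: {task}",
            f"manipulate the objects to progress: {task}",
            f"verify completion of: {task}",
        ]


@dataclass
class VisionLanguageAction:
    """Stand-in for the VLA model: executes one sub-step at a time."""

    log: list[str] = field(default_factory=list)

    def act(self, step: str) -> bool:
        # A real VLA model would emit low-level motor commands here.
        self.log.append(step)
        return True  # report success so the loop advances


def run_task(task: str) -> list[str]:
    planner, executor = EmbodiedReasoner(), VisionLanguageAction()
    for step in planner.plan(task):   # ER model: decide what to do next
        if not executor.act(step):    # VLA model: perceive and act
            break                     # a real system would replan here
    return executor.log


completed = run_task("sort the laundry by color")
print(len(completed))
```

The point of the sketch is the separation of concerns: the planner never touches actuators, and the executor never reasons beyond the single step it is given, mirroring how the ER model "provides high-level insights to help the VLA model decide what to do next."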


Gemini Robotics On-Device

This iteration of our VLA model is highly versatile and optimized to run locally on robotic devices, allowing robotics developers to adapt the model and improve performance in their own applications.


Gemini Robotics SDK

A set of developer tools for evaluating and adapting Gemini Robotics models to developers' own tasks and environments.


Experience Gemini Robotics

If you're interested in testing our models, please share a few details to join the waitlist.