RT-Trajectory: Robotic Task Generalization via Hindsight Trajectory Sketches

Abstract

In order for generalist robot policies to operate robustly in the real world, they must be able to accomplish many tasks in a wide array of situations, potentially beyond those seen in their training datasets. Recent work has studied generalization in such settings by conditioning policies on modalities such as natural language or object-centric visual representations. While these methods have shown promise in high-level generalization to semantics, objects, or visual distribution shifts, significant open challenges remain in generalizing to novel low-level motions. This challenge is both a theoretical and a practical barrier to scaling robot learning methods, as real-world robot situations may demand significantly more low-level motion capability than is contained in common large-scale tabletop pick-and-place datasets. To tackle this, we propose policy conditioning with rough trajectory sketches, which are detailed enough to express low-level, motion-centric guidance yet coarse enough to require learning-based policies to interpret the sketch in the context of situational visual observations. The trajectory sketch is also a scalable and flexible representation: during training it can be generated in hindsight from proprioception sensors, and at inference time it can be specified through simple human inputs such as drawings or videos, or through automated methods such as modern image-generation or waypoint-generation models. We present trajectory-conditioned policies within a holistic framework, which we demonstrate at scale on a variety of real-world robotic tasks requiring diverse motions beyond the tabletop pick-and-place tasks our method was trained on.
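
To make the hindsight generation step concrete, the sketch below illustrates one plausible way to turn recorded proprioception into a 2D trajectory sketch: project the logged end-effector positions into the camera image with a pinhole model and rasterize them as a polyline overlay. This is a minimal illustration under assumed names and a fixed camera (`project_points`, `rasterize_sketch`, and the example intrinsics/extrinsics are hypothetical stand-ins), not the paper's actual implementation.

```python
# Hindsight trajectory-sketch generation (illustrative sketch): project
# recorded 3D end-effector positions into the camera image and draw them
# as a 2D polyline that can condition a policy. Assumes a pinhole camera.
import numpy as np


def project_points(points_world: np.ndarray,
                   intrinsics: np.ndarray,
                   world_to_cam: np.ndarray) -> np.ndarray:
    """Project Nx3 world-frame points to Nx2 pixel coordinates."""
    ones = np.ones((points_world.shape[0], 1))
    pts_h = np.hstack([points_world, ones])        # (N, 4) homogeneous
    pts_cam = (world_to_cam @ pts_h.T).T[:, :3]    # world -> camera frame
    # Pinhole projection: normalize by depth, then apply intrinsics.
    uv = (intrinsics @ (pts_cam / pts_cam[:, 2:3]).T).T
    return uv[:, :2]


def rasterize_sketch(uv: np.ndarray, height: int, width: int) -> np.ndarray:
    """Rasterize projected waypoints as a single-channel polyline image."""
    sketch = np.zeros((height, width), dtype=np.uint8)
    for (u0, v0), (u1, v1) in zip(uv[:-1], uv[1:]):
        # Crude line drawing: densely interpolate between waypoint pairs.
        for t in np.linspace(0.0, 1.0, num=32):
            u = int(round(u0 + t * (u1 - u0)))
            v = int(round(v0 + t * (v1 - v0)))
            if 0 <= v < height and 0 <= u < width:
                sketch[v, u] = 255
    return sketch


if __name__ == "__main__":
    # Hypothetical logged end-effector path and a fixed example camera.
    ee_positions = np.array([[0.00, 0.00, 0.20],
                             [0.05, 0.05, 0.25],
                             [0.10, 0.00, 0.15]])
    K = np.array([[525.0, 0.0, 160.0],
                  [0.0, 525.0, 120.0],
                  [0.0, 0.0, 1.0]])
    T = np.eye(4)
    T[:3, 3] = [0.0, 0.0, 1.0]  # shift points to positive depth in front of camera
    uv = project_points(ee_positions, K, T)
    overlay = rasterize_sketch(uv, height=240, width=320)
```

The same rasterized image could equally come from a human drawing or an automated waypoint generator at inference time, which is what makes the 2D sketch a flexible conditioning interface.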

Authors

Jiayuan Gu, Sean Kirmani, Paul Wohlhart, Yao Lu, Montserrat Gonzalez Arenas, Kanishka Rao, Wenhao Yu, Chuyuan Fu, Keerthana Gopalakrishnan, Zhuo Xu, Priya Sundaresan, Peng Xu, Hao Su, Karol Hausman, Chelsea Finn, Quan Vuong, Ted Xiao

Venue

ICLR 2024