Surely you’ve seen countless videos of robots opening doors and strolling through them. The dirty little secret is that most, if not all, of those demonstrations involved some hand-holding.
That may take the form of manual remote guidance, in which a user controls the robot in real time, or guided training, in which the robot is led through the process once so that it can replicate the activity precisely afterward.
New research from ETH Zurich, however, suggests a method that requires “minimal manual guidance.” Effectively, there are three steps involved. First, the user describes the scene and the desired action. Second, the system plans a deliberately rough, roundabout route. Finally, it pares that route down to its minimal viable form.
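The last two steps — sketching a rough route first and then paring it down — can be shown in miniature. The toy below is a hypothetical illustration on a 2D grid, not the ETH Zurich planner; the function names, the grid setup, and the shortcutting rule are all invented for the example:

```python
# Hypothetical sketch (not the ETH Zurich planner): plan a deliberately
# roundabout route, then reduce it to its minimal form.

def coarse_plan(start, goal):
    # Convoluted initial route: step along x first, then along y,
    # one grid cell at a time.
    path = [start]
    x, y = start
    gx, gy = goal
    while x != gx:
        x += 1 if gx > x else -1
        path.append((x, y))
    while y != gy:
        y += 1 if gy > y else -1
        path.append((x, y))
    return path

def shortcut_path(path):
    # Minimal viable form: keep only the waypoints where the direction
    # of travel changes (start, corners, goal).
    if len(path) <= 2:
        return path
    minimal = [path[0]]
    for prev, cur, nxt in zip(path, path[1:], path[2:]):
        d1 = (cur[0] - prev[0], cur[1] - prev[1])
        d2 = (nxt[0] - cur[0], nxt[1] - cur[1])
        if d1 != d2:
            minimal.append(cur)
    minimal.append(path[-1])
    return minimal

rough = coarse_plan((0, 0), (3, 2))      # 6 waypoints, cell by cell
short = shortcut_path(rough)             # [(0, 0), (3, 0), (3, 2)]
```

The real planner, of course, optimizes over whole-body motions, forces, and contact schedules rather than grid waypoints; the point here is only the plan-then-reduce structure.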
“Given high-level descriptions of the robot and object,” the research paper explains, “along with a task specification encoded through a sparse objective, our planner holistically discovers: how the robot should move, what forces it should exert, what limbs it should use, as well as when and where it should establish or break contact with the object.”
The system’s tasks are divided into two primary categories: object-centric and robot-centric. The former includes activities such as opening a door or a dishwasher, while the latter includes tasks such as navigating the robot around obstacles.
The team claims the system can be adapted to different form factors, but for simplicity these demonstrations are performed on a quadruped: specifically, ANYbotics’ ANYmal. The company was spun off from ETH Zurich and has, as a result, become a favorite platform for such research initiatives.
The team notes that the work can be used as a stepping stone towards “developing a fully autonomous loco-manipulation pipeline.” Thus, we are one step closer to having systems that can open doors without human intervention.