Learning From Animators to Improve PR2

Now that we're making progress on improving ROS and the PR2's form and functionality, we're starting to focus some of our research efforts on robot behavior design. This means pushing further on our human-robot interaction work, extending it from personal space to a deeper understanding of other non-verbal behaviors in robots.

We want robots to be more human-readable, meaning that anyone watching the robot can make a reasonable guess about what it is doing. Our goals for this are two-fold: to increase safety and to make robots more effective in their interactions with and around people. If you knew that PR2 was about to plug itself into an outlet, you would also know that it's not safe to stand between the robot and the outlet. If you knew that PR2 needed your help, then you could help it perform a task more efficiently, or one that would otherwise be impossible for it to do alone.

To get this research and design rolling, we are learning from our animator friends, who know much more about breathing life into inanimate objects than we do. Professional animator Doug Dooley has been helping us to prototype PR2 behaviors to make its actions more human-readable and to design more interactive behaviors for PR2 to coordinate with people.

During Milestone 2, PR2 would often sit still in front of a door, making it difficult for passersby to tell whether it had simply stopped in front of the door, was trying to perceive the door, or had run into a problem. One possible behavior it could perform to show that it's working is this:

This second video shows another possible behavior for PR2: signaling that it would like help with plugging itself into a wall outlet. If it turns out that the wall outlets are too difficult to find or reach, or something else goes wrong with plugging in, then PR2 could fall back on asking a passerby for help with the task, like this:

These are just a couple of examples of the communicative behaviors we are working out with Doug as we learn how to apply techniques already perfected in animation to the design of more human-readable robot behaviors. In collaborating with him, our longer-term goal is to see if and how principles from animation can be used to improve both the safety and the effectiveness of human-robot interactions, ideally testing these behaviors across multiple robot forms. We are also drawing on what we know about human non-verbal communication to inform the design of these behaviors.

If you are interested in seeing more of these animations, you're more than welcome to participate in our upcoming online study to evaluate these robot behaviors. Just sign up here!

-- Leila Takayama