Willow Garage Blog

August 10, 2009

University of Freiburg researcher Jürgen Sturm is just finishing his second internship here at Willow Garage. In addition to helping out with our egg-breaking efforts, he's been using the stereo cameras of PR2 to detect planar objects like doors, drawers, books, and boxes. More importantly, he has been tracking the movement of these objects to learn articulation models, i.e. how these objects move. Does the door open to the left or the right? Does the drawer slide in or out? Where will the handle of the drawer be when it is fully open? This is the sort of information that is critical for enabling robots to operate in our own environments.

We've posted a video of Jürgen discussing his findings as well as slides from a presentation that he gave at Willow Garage. You can also download his code from the planar_objects ROS package.

Planar Objects and Articulation Models on Scribd (Download PDF from ros.org)

August 4, 2009

In order to do good research, sometimes you have to break some eggs. In this case, in order to write a good controller for the PR2 gripper, Penn's Matt Piccoli has been breaking a lot of eggs. Matt is on his second stint at Willow Garage as an intern, and, together with Sachin Chitta, he crushed a lot of eggs with the PR2 gripper to figure out just how much force it takes to crack an egg. They were able to use this data to develop a better grasp controller for the PR2 gripper. They wrote a closed loop controller that switches between two modes: it starts as a velocity controller, but it then switches to a force controller when it senses the egg in its grasp using its fingertip sensors. This new controller has many uses beyond grasping eggs. By starting with a more difficult object like an egg, they can now use the same controller for less delicate objects as well.
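The two-mode scheme described above can be sketched as a simple state machine. The code below is a self-contained simulation, not the actual PR2 controller; the contact model, gains, and force thresholds are illustrative assumptions:

```python
# Illustrative sketch of a two-mode grasp controller: close at constant
# velocity until the fingertip sensors register contact, then regulate
# grip force instead. All numbers here are made up for the simulation.

CONTACT_THRESHOLD = 0.5   # N, fingertip force that signals contact
TARGET_FORCE = 2.0        # N, desired grip force (below cracking force)
FORCE_GAIN = 0.001        # m per N of force error (simple P control)
CLOSE_VELOCITY = 0.01     # m of gripper travel per control step

def grasp_step(gap, fingertip_force, mode):
    """One control step. Returns (new_gap, new_mode)."""
    if mode == "velocity" and fingertip_force > CONTACT_THRESHOLD:
        mode = "force"  # contact detected: switch control modes
    if mode == "velocity":
        gap -= CLOSE_VELOCITY                   # close at constant speed
    else:
        error = TARGET_FORCE - fingertip_force  # regulate grip force
        gap -= FORCE_GAIN * error
    return gap, mode

def simulate(object_size=0.05, stiffness=1000.0, steps=200):
    """Grasp a springy object of the given size (m); return final state."""
    gap, mode, force = 0.08, "velocity", 0.0
    for _ in range(steps):
        # Simple spring model: force grows as the object is compressed.
        force = max(0.0, stiffness * (object_size - gap))
        gap, mode = grasp_step(gap, force, mode)
    return mode, force

mode, force = simulate()
print(mode, round(force, 2))  # -> force 2.0
```

The point of the switch is that the velocity mode makes no assumption about the object's size, while the force mode caps how hard the fingers squeeze once contact is made.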

August 1, 2009
by Jorge Cham
Comic by Jorge Cham, Ph.D Comics
Photo: Robotics professor Sebastian Thrun protects an entire wing of Willow Garage with a chess piece.

There's been some recent discussion on robo-ethics, including this Wired article as well as this front-page New York Times article. We've noticed that there is a Hollywood-inspired undercurrent of fear, particularly one of a robot uprising, that greets each new step forward in robotics, such as when we programmed our PR2 robot to plug itself in. To help allay those fears, we've put together our own guide on How to Defeat a PR2 Uprising.

  1. Lay 2x4s across the ground. The PR2 is highly maneuverable on inclines up to 5 degrees, but it cannot drive over sharp obstacles like a 2x4. The PR2 can also be turned away by anything that's at least 3cm tall.
  2. Always keep a supply of liquid nearby to toss on any advancing robot. We have Naked and Odwalla fruit juice fridges in a central location and restock them frequently.
  3. Stairs are the enemy of cows, Daleks, and wheeled robots.
  4. Black power outlets are like Ninjas: invisible to the PR2.
  5. Use round door knobs. The PR2 can only enter homes that are ADA-compliant.

For a more comprehensive guide, please refer to Daniel Wilson's, "How to Survive a Robot Uprising."

More seriously, we are interested in the discussion on robot ethics and safety. We are more concerned about people commanding robots to do harm than robots doing harm on their own. How can we design hardware systems to be safer around people? How can we use software to enhance the safety of the hardware systems?

There is a longer-term discussion about what happens if people ever figure out how to replicate human intelligence in a machine. For more on the distinction between the machine intelligence that runs robots like PR2 and human intelligence, check out Helen Greiner's recent article, "Who Needs Humanoids?"

July 31, 2009

ROS 0.7.1 has been released! This is a patch release, with the bulk of changes addressing regressions that occurred with the 0.7.0 update. If you have experienced issues with rosrecord, roslaunch, or rxplot, you may wish to install this update.


  • roslaunch: fix to printerrlog issue in __init__.py
  • rosdep: implement a one-step 'install' argument
  • rosrecord:
    • fix in time scaling related to playback of multiple bags.
    • fix in rpath being cleared on install to $ROS_ROOT/bin
    • fixed bug over-aggressively resizing a message buffer
  • rostopic: better error message if topic type is invalid and minor fixes
  • rxplot:
    • added missing time import
    • fix bad wxwidgets swig wrapper import
    • bug fixes to errors when invalid topic name or spec are given
  • rosbuild:
    • fix to dependency management for auto-generated source files
    • very probable fix for the delete-a-file problem
  • rospy: ignore spurious exception caused in receive_loop during interpreter shutdown
  • rosemacs: minor updates due to rostopic changes
  • rosdep: pygame, python-clearsilver, debian updates, qt3, log4cxx (OS X)
  • rosconfig: fix bug that was raising an exception in all non-svk installs

July 27, 2009

ROS 0.7

ROS 0.7 has been released. The 0.7.x series introduces many new features and bug fixes. The two biggest changes are "Publisher latching" as well as the deprecation of the old roscpp "Node" API.

Publisher latching: Publisher latching lets you automatically send the last message published to new subscribers. This has many use cases. For example, instead of writing a map Service, you could have a map Topic. A Topic is preferable as you can then record the map data using rosrecord. Another use for publisher latching is to reduce the publication rate for data that is slowly changing, as you can now more easily ensure that new subscribers will receive the latest data automatically. Previously this was possible using subscriber callbacks, but this streamlines implementations. By default, rostopic pub now latches, so you can use it to quickly 'latch' a value into the ROS Computation Graph.
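The latching semantics can be illustrated with a small self-contained sketch. This mimics the behavior only; it is not the ROS transport code or API:

```python
# Minimal illustration of publisher latching: a latched publisher caches
# the last message and delivers it to any subscriber that connects later.

class LatchedPublisher:
    def __init__(self, latch=True):
        self.latch = latch
        self.subscribers = []
        self.last_msg = None

    def publish(self, msg):
        self.last_msg = msg
        for callback in self.subscribers:
            callback(msg)

    def subscribe(self, callback):
        self.subscribers.append(callback)
        # The key behavior: a late subscriber immediately receives the
        # most recent message, so slowly-changing data (like a map)
        # need only be published once.
        if self.latch and self.last_msg is not None:
            callback(self.last_msg)

pub = LatchedPublisher()
pub.publish("map data v1")      # published before anyone subscribed

received = []
pub.subscribe(received.append)  # late subscriber still gets the map
print(received)                 # -> ['map data v1']
```

Without latching, the late subscriber above would receive nothing until the next publish, which is why a map Topic previously required either a Service or periodic republishing.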

Node API deprecated: roscpp's Node API is now deprecated in favor of the NodeHandle API. Some of the NodeHandle methods have been deprecated as well. Please update your code as the Node API will be removed in future releases. Please see the notes below for specific details on NodeHandle methods that have been deprecated.

For a more detailed list of changes, please see the changelist.

To download this release, please visit the download page.

We've also put together a new tutorial to introduce the ROS concepts and associated command-line tools. This is still a work-in-progress, but feel free to check it out.

July 17, 2009

A while back we showed a demo video using an iPod Touch to drive around a PR2. It was a fun experiment, but it relies on a proof-of-concept that's not quite ready for primetime.

Srećko Jurić-Kavelj of the University of Zagreb showed us that there's more than one way to get data from an iPod Touch into ROS. Instead of cross-compiling ROS onto the iPhone, which is still very difficult, he used the open-source accelerometer-simulator project on Google Code to receive the accelerometer data on another computer. He was then able to easily adapt a sample Python script to broadcast that data as a rospy node.
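The general pattern, receiving sensor packets over the network, parsing them, and rebroadcasting, can be sketched as follows. The comma-separated packet format and the port number here are hypothetical stand-ins, not the actual accelerometer-simulator wire format or Jurić-Kavelj's script:

```python
# Sketch of the receive-and-rebroadcast pattern: accelerometer readings
# arrive as UDP datagrams from the phone and are handed to a callback
# (in a real setup, a function publishing a ROS message). The 'x,y,z'
# text format and port are made-up examples, not the real protocol.
import socket

def parse_accel_packet(data: bytes):
    """Parse a hypothetical 'x,y,z' datagram into three floats."""
    x, y, z = (float(v) for v in data.decode("ascii").split(","))
    return x, y, z

def receive_loop(port=10552, handler=print):
    """Listen for accelerometer datagrams and pass each reading on."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    while True:
        data, _addr = sock.recvfrom(1024)
        handler(parse_accel_packet(data))

print(parse_accel_packet(b"0.1,-0.2,9.8"))  # -> (0.1, -0.2, 9.8)
```

The appeal of this approach is that only a tiny sender runs on the phone; everything ROS-specific stays on a desktop machine where ROS already builds.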

You can see the results here as he drives around a Pioneer 3DX:

July 15, 2009

Several of our researchers attended the IJCAI 2009 Workshop on Robotics, which was held this past Monday. There were a lot of great talks, including Aaron Edsinger of Meka Robotics talking about Meka's 7 DOF arm and 4-5 DOF hand, as well as Giorgio Metta of the University of Genoa talking about the iCub project. The iCub project is similar to our goals with the PR2/ROS projects: it also combines an open source platform (YARP) with a robot platform that is being given away to researchers.

Willow Garage's Brian Gerkey gave a talk, "Towards a Robot App Store," which discusses some of the goals and philosophy of our ROS software -- we hope they will lead to a future where robotics applications are easily written and shared. You can browse and download the slides below.

Towards a Robot App Store on Scribd (Download PDF from ros.org)

July 14, 2009

Now that we're making progress with improving ROS and the PR2's form and functionality, we're starting to focus some of our research efforts on robot behavior design. This means we're pushing on our human-robot interaction efforts, extending our work from personal spaces to gaining a deeper understanding of other non-verbal behaviors in robots.

We want robots to be more human-readable, meaning that anyone watching the robot can make a reasonable guess at what it is doing. Our goals for this are two-fold: increase safety and make robots more effective in their interactions with and around people. If you knew that PR2 was about to plug itself into an outlet, you would also know that it's not safe to stand between the robot and the outlet. If you knew that PR2 needed your help, then you could help it perform a task more efficiently, or one that would otherwise be impossible for it to do alone.

To get this research and design rolling, we are learning from our animator friends, who know much more about breathing life into inanimate objects than we do. Professional animator Doug Dooley has been helping us to prototype PR2 behaviors to make its actions more human-readable and to design more interactive behaviors for PR2 to coordinate with people.

During Milestone 2, PR2 would often sit still in front of a door, making it difficult for passersby to tell whether it was just stopped in front of the door, trying to perceive the door, or whether something had gone wrong. One possible behavior it could use to show that it's working is this:

This second video shows another possible behavior for PR2 to show that it would like help with plugging itself into a wall outlet. If it turns out that the wall outlets are too difficult to find, too difficult to reach, or something else went wrong with plugging in, then PR2 could fall back on asking a passerby for help with the task like this:

These are just a couple of examples of communicative behaviors that we are working out with Doug to learn how to apply techniques already perfected in animation to the design of more human-readable robot behaviors. In collaborating with him, our longer-term goal is to see if and how principles from animation can be used to improve both the safety and effectiveness of human-robot interactions in the future, ideally testing these behaviors out across multiple robotic forms. We are also drawing from what we know about human non-verbal communication to inform the design of these behaviors.

If you are interested in seeing more of these animations, you're more than welcome to participate in our upcoming online study to evaluate these robot behaviors. Just sign up here!

-- Leila Takayama

July 13, 2009

The slides from last week's RSS 2009 workshop presentations are included below for those who were at the talks and would like the slides, or for those who weren't there and would like to skim through them. For Sachin Chitta's Mobile Manipulation Workshop presentation, which gives an overview of our Milestone 2 research and accomplishments, please see the previous post.

Angelic Hierarchical Planning: The PR2 robot has to make decisions at many levels, all the way from deciding which task to do (e.g. "Grasp an object") down to planning the motion of the arm to accomplish that task. This presentation describes some of the preliminary results in our research into integrating these different levels of planning.

Angelic Hierarchical Planning

Towards a Science of Robotics: Just as we hope that platforms like ROS and the PR2 will help foster reproducibility of results, so, too, can greater scientific rigor. Leila Takayama looks at two studies in human-robot interaction in order to highlight issues of scientific rigor as we move towards a science of robotics. (full paper [pdf].)

Takayama RSS2009 Workshop

July 11, 2009

Update: the issues with the video have been worked out, but you can still use the links below to download the video if you prefer.

We've been receiving reports that many of you outside the US have been unable to view our Milestone 2 video, with messages like, "This video is not available in your country due to copyright restrictions." We are working to fix this and have some workarounds below, but first some explanation.

YouTube has informed us that the video "includes audio content that is owned or licensed by UMG [Universal Music Group]." We would find this humorous if it didn't also mean that many of you are unable to watch the video. The video contains almost no audio other than the occasional sound of the PR2 opening a door and plugging in, as well as some applause at the end. We've been joking that it must have matched John Cage's famous 4'33", which consists of four minutes and thirty-three seconds of silence. We wish we knew what song Universal Music Group thinks it matches as we have been looking for some good music to go along with the sounds of the PR2.

We disputed this bogus claim, but our dispute was rejected. We're now entering into a more formal process to file a DMCA counterclaim. Unfortunately, it may take a couple weeks to resolve.

In the meantime, we're happy to provide you links to the original video files used to create the YouTube version, as well as an iPhone-friendly version:

Milestone 2 Highlights (iPhone, 28MB)

Milestone 2 Highlights (HD, 196MB)