OpenRAVE and ROS

OpenRAVE + PR2

Rosen Diankov of CMU just finished up a winter internship here at Willow Garage. Here's his report on the work he did helping Willow Garage improve its manipulation capabilities:

Manipulation is a key capability for a robot that needs to act on its environment. During my short stay at Willow Garage, we worked toward getting the PR2 robot to autonomously manipulate boxes on a table using the OpenRAVE planning system developed at Carnegie Mellon University (CMU). What makes OpenRAVE worth using is that its grasp and manipulation planning algorithms do not rely on hand-coded, task-specific domain knowledge; the planners generalize across robots without parameter tweaking. Over the past two years, OpenRAVE has been used to write robust manipulation scripts that have been tuned and tested on multiple robot systems.
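
As a rough illustration of that robot-independence, here is a minimal sketch using OpenRAVE's Python bindings (openravepy): the grasping database samples gripper preshapes and approach directions for whatever end-effector the robot model provides, so the same few calls work for the PR2 or any other robot. The scene file and body name below are placeholders, not part of the demo described in this post.

    # Assumed sketch using OpenRAVE's Python bindings (openravepy); the scene
    # file and target body name are placeholders for whatever the sensed
    # world contains.
    from openravepy import Environment, databases

    env = Environment()
    env.Load('data/pr2test1.env.xml')        # placeholder scene with a robot and a box
    robot = env.GetRobots()[0]
    target = env.GetKinBody('box')           # placeholder name of the object to grasp

    gmodel = databases.grasping.GraspingModel(robot, target)
    if not gmodel.load():                    # reuse cached grasps if available
        gmodel.autogenerate()                # otherwise sample and test grasps automatically
    validgrasps, validindices = gmodel.computeValidGrasps(returnnum=1)
    print('found %d valid grasp(s) in the current scene' % len(validgrasps))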

During the first week at Willow Garage, we were able to quickly identify the features ROS needed to support the OpenRAVE integration. Because Willow Garage supports an open environment, these features were quickly discussed, approved, developed, debugged, and completed. Some of the new packages include (a short usage sketch follows the list):

  • openrave: an OpenRAVE server running on top of the ROS framework, providing manipulation planning and grasping services
  • rosoct: native Octave/Matlab ROS bindings for scripting
  • laser_camera_calibration: calibration routines for the real-world sensors
  • robot_self_filter: filters for removing the robot's own links from sensor data
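
To give a sense of how the openrave server is meant to be used from other ROS nodes, here is a hedged client-side sketch. The service name, service type, and request field are illustrative assumptions, not the package's documented interface; only the rospy calls themselves are standard.

    #!/usr/bin/env python
    # Hypothetical client of the openrave ROS node. The service name
    # ('openrave/plan_grasp'), the service type (PlanGrasp), and the request
    # field ('target') are illustrative assumptions only.
    import rospy
    from my_manipulation_msgs.srv import PlanGrasp  # hypothetical message package

    rospy.init_node('grasp_client')
    rospy.wait_for_service('openrave/plan_grasp')
    plan_grasp = rospy.ServiceProxy('openrave/plan_grasp', PlanGrasp)
    response = plan_grasp(target='box0')            # ask for a grasp plan on 'box0'
    rospy.loginfo('planner returned: %s', response)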

In the end, the robot was able to run for long periods of time without any component crashing. One of the advantages of using OpenRAVE in conjunction with ROS is that it is easy to prototype demos and complex systems. When making minor changes to code, the user does not have to take down the entire system, only the one component of interest; the rest of the components are designed to be robust to a peer going down and coming back.
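
That modularity comes from ROS's node model: each component is a separate process that finds its peers through the ROS master, so it can be killed and restarted in isolation. The sketch below is an assumed, simplified example of one such restartable component; the node name, topic, and message type are illustrative only.

    #!/usr/bin/env python
    # Minimal, assumed example of a restartable component: if this process is
    # killed and relaunched, other nodes reconnect through the ROS master and
    # keep running unchanged.
    import rospy
    from std_msgs.msg import String

    def on_command(msg):
        rospy.loginfo('received command: %s', msg.data)

    rospy.init_node('grasp_executor')                    # illustrative node name
    rospy.Subscriber('grasp_commands', String, on_command)
    rospy.spin()                                         # process callbacks until shutdown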

Below is a video showing the beginnings of manipulation on the PR2. The video starts by showing how the grasps are automatically generated, as a proof of concept in simulation. It then shows the real robot performing the same tasks. In the upper left corner is OpenRAVE's view of the sensed world. At every grasp attempt, the robot makes a clone of this environment and analyzes it for possible grasps. Environment clones remove the need to worry about objects moving in and out of the scene while the planner executes. Trajectories are sent to the real robot as parts of the plan complete. During the pauses, the robot updates its internal view of the world with the new robot state and begins the next set of plans from there.
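
In code, the cloning idea looks roughly like the following openravepy sketch: the planner queries a frozen snapshot of the sensed environment rather than the live one, so objects appearing or moving mid-plan cannot invalidate the search. The scene file is a placeholder, and the actual planning calls used in the demo are elided.

    # Assumed openravepy sketch of planning against an environment clone.
    from openravepy import Environment, CloningOptions

    env = Environment()                              # live, sensor-updated world
    env.Load('data/pr2test1.env.xml')                # placeholder scene

    cloned = env.CloneSelf(CloningOptions.Bodies)    # frozen snapshot of all bodies
    try:
        robot = cloned.GetRobots()[0]
        # ... analyze grasps and plan motions against the frozen clone here ...
    finally:
        cloned.Destroy()                             # release the clone when done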

In the near future, we plan to feed dynamic obstacles tracked by the laser range finder into the manipulation planner. The robot arm should then be able to modify its trajectories on the fly as new obstacles are detected by the scanner. This will obviate the need for a motion capture system to track the robot and the table, and will allow us to perform this manipulation demo virtually anywhere!