Willow Garage Blog

March 23, 2010

Robots Using ROS

The Robots Using ROS series is continuing over at ROS.org. The latest installments are:

  • Care-O-bot 3: Fraunhofer IPA's mobile manipulation platform has broad support for ROS. The accompanying open-source repository includes everything from device drivers to simulation in Gazebo.
  • Bosch RTC's robot: Bosch RTC's Segway RMP-based robot has been used to develop new libraries for ROS, including an exploration stack.
  • EL-E and Cody: Georgia Tech's Healthcare Robotics Lab has released drivers and has also released code to accompany research papers.
  • Kawada HRP2-V: the JSK Lab at the University of Tokyo has integrated this omni-directional variant of the HRP-2 with the ROS navigation stack.
  • Prairie Dog: the Correll Lab at the University of Colorado uses this iRobot Create-based platform for teaching and research.

Previously:

  • STAIR 1: the Stanford University mobile manipulation research platform that provided the predecessor of the ROS framework.
  • Aldebaran Nao: a small, commercially available humanoid robot that demonstrated the ability of the ROS community (Brown University and University of Freiburg) to come together and develop open source drivers.
  • i-Sobot: an even smaller humanoid robot controlled by the ROS PS3 joystick driver. The developer has been publishing a Japanese-language blog on ROS, helping ROS reach new audiences.
  • Junior: Stanford Racing's autonomous car that finished a close second in the DARPA Urban Challenge. Junior's main software framework is IPC, but ROS's modular libraries have made it easy to integrate ROS-based perception libraries into their obstacle classification system.

For more information, please see the full posts on ROS.org.

March 17, 2010


ROS 1.1 has been released. This is an unstable development release to test and develop features for ROS 1.2 (see version policy). Install at your own risk.


Major changes:

  • C++: There are major changes to roscpp, including a new message serialization API, custom allocator support, new subscription callback types, non-const subscriptions, and fast intraprocess message passing. These changes are backwards-compatible, but require further testing and API stabilization; a brief sketch of the new subscription forms follows this list.

  • Python: The rospy client library is largely unchanged, though the ROS Master has been moved to a separate package. There are many internal API changes being made to the Python code base to support the rosh scripting environment under active development.
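For a flavor of the new subscription flexibility, here is a minimal roscpp sketch that subscribes to the same topic with both the familiar const callback and one of the new non-const forms. It is patterned on the roscpp documentation of this era; treat it as illustrative rather than definitive while these APIs stabilize.

    #include <ros/ros.h>
    #include <std_msgs/String.h>

    // Familiar const form: a read-only shared pointer to the message.
    void constCallback(const std_msgs::String::ConstPtr& msg)
    {
      ROS_INFO("heard: %s", msg->data.c_str());
    }

    // Non-const form: roscpp delivers a mutable message, copying only when
    // another subscription also needs its own instance.
    void nonConstCallback(const boost::shared_ptr<std_msgs::String>& msg)
    {
      msg->data += " (modified in place)";
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "listener");
      ros::NodeHandle nh;
      ros::Subscriber s1 = nh.subscribe("chatter", 10, constCallback);
      ros::Subscriber s2 = nh.subscribe("chatter", 10, nonConstCallback);
      ros::spin();
      return 0;
    }

The non-const form ties into the fast intraprocess message passing mentioned above: when publisher and subscriber live in the same process and no other subscription needs the message, roscpp can hand it over without copying.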

For future development plans for the ROS 1.1.x branch, please see the ROS roadmap.

Full Changelist

March 16, 2010


Efforts are underway to develop a navigation stack for arms analogous to the navigation stack for mobile bases. This effort includes a wide range of libraries for collision detection, trajectory filtering, and motion planning for robot manipulators, as well as PR2-specific implementations of kinematics.

As part of this effort, Willow Garage is happy to announce the release of our first set of research stacks for arm motion planning. These stacks include a high-level arm_navigation stack, as well as general-purpose stacks called motion_planners, motion_planning_common, motion_planning_environment, motion_planning_visualization, kinematics, collision_environment, and trajectory_filters. There are also several PR2-specific stacks. All of these stacks can be installed on top of Box Turtle:

Installation Page

Significant contributions were made to this set of stacks by our collaborators and interns over the past two years:

  • Ioan Şucan (from Lydia Kavraki's lab at Rice) developed the initial version of this framework as an intern at Willow Garage over the summers of 2008 and 2009, and has continued to contribute significantly since. His contributions include the OMPL planning library, which contains a variety of probabilistic planners, including ones developed by Lydia Kavraki's lab over the years.
  • Maxim Likhachev's group at Penn (including Ben Cohen, who was a summer intern at Willow Garage in 2009) contributed the SBPL planning library that incorporates the latest techniques in search based motion planning.
  • Mrinal Kalakrishnan from USC developed the CHOMP motion planning library while he was an intern at Willow Garage in 2009. This library is based on the work of Nathan Ratliff, Matthew Zucker, J. Andrew Bagnell and Siddhartha Srinivasa.

Additional contributions also came from Radu Rusu, Matei Ciocarlie, and Kaijen Hsiao (from Willow Garage) and Rosen Diankov (from CMU).

These stacks are currently classified as research stacks, which means that they have unstable APIs and are expected to change. We expect the core libraries to reach maturity fairly quickly and be released as stable software stacks, while other stacks will continue to incorporate the latest in motion planning research from the world-wide robotics community. We encourage the community to try them out to provide feedback and contribute. A good starting point is the arm_navigation wiki page. There is also a growing list of tutorials.
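To give a flavor of the ROS API these stacks expose, here is a sketch of sending a pose goal to move_arm through actionlib, patterned on the early arm navigation tutorials. The package, message, and field names (move_arm_msgs, motion_plan_request, addGoalConstraintToMoveArmGoal) reflect this unstable, still-changing API; check the wiki pages for the authoritative version.

    #include <ros/ros.h>
    #include <actionlib/client/simple_action_client.h>
    #include <move_arm_msgs/MoveArmAction.h>
    #include <move_arm_msgs/utils.h>
    #include <motion_planning_msgs/SimplePoseConstraint.h>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "move_arm_pose_goal");

      // Connect to the move_arm action server for the PR2's right arm.
      actionlib::SimpleActionClient<move_arm_msgs::MoveArmAction>
          client("move_right_arm", true);
      client.waitForServer();

      move_arm_msgs::MoveArmGoal goal;
      goal.motion_plan_request.group_name = "right_arm";
      goal.motion_plan_request.num_planning_attempts = 1;
      goal.motion_plan_request.allowed_planning_time = ros::Duration(5.0);
      goal.planner_service_name = "ompl_planning/plan_kinematic_path";

      // Desired gripper pose, expressed as a simple pose constraint
      // with loose tolerances.
      motion_planning_msgs::SimplePoseConstraint pose;
      pose.header.frame_id = "torso_lift_link";
      pose.link_name = "r_wrist_roll_link";
      pose.pose.position.x = 0.75;
      pose.pose.position.y = -0.2;
      pose.pose.orientation.w = 1.0;
      pose.absolute_position_tolerance.x = 0.02;
      pose.absolute_position_tolerance.y = 0.02;
      pose.absolute_position_tolerance.z = 0.02;
      pose.absolute_roll_tolerance = 0.04;
      pose.absolute_pitch_tolerance = 0.04;
      pose.absolute_yaw_tolerance = 0.04;
      move_arm_msgs::addGoalConstraintToMoveArmGoal(pose, goal);

      client.sendGoal(goal);
      client.waitForResult();
      return 0;
    }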

Here are some blog posts of demos that show these stacks in use:

  1. JSK demo (pr2_kinematics)
  2. Robot replugged (pr2_kinematics)
  3. Hierarchical planning (OMPL, move_arm)
  4. Towers of Hanoi (move_arm)
  5. Detecting tabletop objects (move_arm)

You can also watch the videos below that feature the work of Ben Cohen, Mrinal Kalakrishnan, and Ioan Şucan.

The individual stack and package wiki pages describe their current status. Most packages have undergone a ROS API review, but the C++ APIs have not yet been reviewed. We encourage you to use the ROS API -- we will make our best effort to keep it stable. The C++ APIs are being actively worked on (see the Roadmap on each wiki page for more details), and we expect to stabilize a few of them in the next release cycle.

Please feel free to point out bugs, make feature requests, and tell us how we can do better. We particularly encourage developers of motion planners to look at integrating their motion planners into this effort. We have made an attempt to modularize the architecture of this system so that components developed by the community can be easily plugged in. We also encourage researchers who may use these stacks on other robots to get back to us with feedback about their experience.

Best Regards,

Your friendly neighborhood arm navigation development team

Sachin Chitta, Gil Jones (Willow Garage)
Ioan Şucan (Rice University)
Ben Cohen (University of Pennsylvania)
Mrinal Kalakrishnan (USC)

Videos

Ioan Şucan, OMPL (blog post):

Ben Cohen, SBPL (blog post):

Mrinal Kalakrishnan, CHOMP (blog post):

March 15, 2010

In recent posts, we've showcased the rviz 3-D visualizer and navigation stack, two of the many useful libraries and tools included in the ROS Box Turtle distribution. Now, we'd like to highlight what we're developing for future distribution releases.

The first is the PR2 calibration stack. The PR2 has two pairs of stereo cameras, two forearm cameras, a high-resolution camera, a tilting laser rangefinder, and a base laser rangefinder. That's a lot of sensor data to combine with the movement of the robot.

The PR2 calibration stack recently made our lives simpler when we updated our plugging-in code. Eight months ago, without accurate calibration between the PR2's kinematics and sensors, the original plugging-in code used a brute-force spiraling approach to determine an outlet's position. Our new calibration capabilities give the PR2 the ability to plug into an outlet in one go.

The video above shows how we're making calibration a simpler, more automated process for researchers. The PR2 robot can calibrate many of its sensors automatically by moving a small checkerboard through various positions in front of its sensors. You can start the process before lunch, and by the time you get back, there's a nicely calibrated robot ready to go.  We're also working on tools to help researchers understand how well each individual sensor is calibrated.
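The calibration stack itself solves a joint optimization over all of the PR2's sensors and joints, but the checkerboard detection at the heart of the data-collection step is easy to illustrate. The sketch below uses OpenCV's standard checkerboard routines; the file name and pattern size are hypothetical, and this is a building block, not the actual PR2 calibration code.

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main()
    {
      // One grayscale view of the checkerboard (file name is hypothetical).
      cv::Mat image = cv::imread("checkerboard_view.png", 0);

      // Inner-corner count of the target; the PR2's target may differ.
      cv::Size pattern(5, 4);
      std::vector<cv::Point2f> corners;

      bool found = cv::findChessboardCorners(image, pattern, corners);
      if (found)
      {
        // Refine corners to subpixel accuracy before handing them to an
        // optimizer such as cv::calibrateCamera (for camera intrinsics).
        cv::cornerSubPix(
            image, corners, cv::Size(11, 11), cv::Size(-1, -1),
            cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::MAX_ITER,
                             30, 0.01));
      }
      return found ? 0 : 1;
    }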

The PR2 calibration stack is still under active development, but can be used by PR2 robot users with Box Turtle. In the future, we hope this stack will become a mature ROS library capable of supporting a wide variety of hardware platforms.

March 15, 2010

Logging and playback are among the most critical capabilities when developing software for robotics. Whether you're a hardware engineer recording diagnostic data, a researcher collecting data sets, or a developer testing algorithms, the ability to record data from a robot and play it back is crucial. ROS supports these needs with "bag files" and tools like rosbag and rxbag.

rosbag can record data from any ROS data source into a bag file, whether it is simple text data or large sensor data, like images. The tool can also record data from programs running on multiple computers. rosbag can play back this data just as it was recorded; to programs listening, the replayed data is indistinguishable from live data. This means that data recorded from a physical robot can be used to create virtual robots for testing software. With the appropriate bag file, you don't even need to own a physical robot.
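Beyond the command line (rosbag record and rosbag play), there is also a programmatic interface. Here is a hedged C++ sketch of writing a message to a bag and reading it back; the rosbag::Bag API shown matches later ROS releases and may differ slightly in Box Turtle.

    #include <cstdio>
    #include <ros/ros.h>
    #include <rosbag/bag.h>
    #include <rosbag/view.h>
    #include <std_msgs/String.h>

    int main()
    {
      ros::Time::init();  // needed for ros::Time::now() outside a node

      // Write one message to a new bag file.
      rosbag::Bag bag("example.bag", rosbag::bagmode::Write);
      std_msgs::String msg;
      msg.data = "hello";
      bag.write("chatter", ros::Time::now(), msg);
      bag.close();

      // Read it back, filtering to the topic of interest.
      bag.open("example.bag", rosbag::bagmode::Read);
      rosbag::View view(bag, rosbag::TopicQuery("chatter"));
      for (rosbag::View::iterator it = view.begin(); it != view.end(); ++it)
      {
        std_msgs::String::ConstPtr s = it->instantiate<std_msgs::String>();
        if (s)
          printf("read: %s\n", s->data.c_str());
      }
      bag.close();
      return 0;
    }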

There are a variety of tools to help you manage your stored data, such as tools for filtering data and updating data formats. There are also tools to manage research workflows for sending bag files to Amazon's Mechanical Turk for dataset labeling.  For data visualization, there is the versatile rxbag tool. rxbag can scan through camera data like a movie editor, plot numerical data, and view the raw message data. You can also write plugins to define viewers for your own data types.

You can watch the video to see rosbag and rxbag in action. Both of these tools are part of the ROS Box Turtle release, which you can download from ROS.org.

March 11, 2010

[Photo: Dallas and Rob]

Last Friday was Discovery Day at William Regnart Elementary school in Cupertino, California.  Among the career day presenters were doctors, firefighters, nurses, dog trainers, and a Texai.  Dallas Goecker, a Willow Garage electrical engineer working from Seymour, Indiana, and Rob Wheeler, a local Willow Garage software engineer, presented their jobs and the story of the Texai, our tele-operated tele-presence robot.     

Dallas was able to weave around desks and fourth-graders, demonstrating the capabilities of his robot.  Despite some teasing after losing wireless connectivity on the playground, Dallas and Rob received certificates of appreciation, one of which is still proudly taped to the Texai.

[Photo: Dallas]


March 9, 2010


The ROS community has grown quickly over the past year, greatly increasing the number and variety of robots integrated with ROS. Over on ROS.org, there's a blog series highlighting some of these robots, as well as the open source ROS Repositories behind them.

The first four installments include:

  • STAIR 1: the Stanford University mobile manipulation research platform that provided the predecessor of the ROS framework.
  • Aldebaran Nao: a small, commercially available humanoid robot that demonstrated the ability of the ROS community (Brown University and University of Freiburg) to come together and develop open source drivers.
  • i-Sobot: an even smaller humanoid robot controlled by the ROS PS3 joystick driver. The developer has been publishing a Japanese-language blog on ROS, helping ROS reach new audiences.
  • Junior: Stanford Racing's autonomous car that finished a close second in the DARPA Urban Challenge. Junior's main software framework is IPC, but ROS's modular libraries have made it easy to integrate ROS-based perception libraries into their obstacle classification system.

For more information, please see the full posts on ROS.org.

March 8, 2010

Three researchers from the JSK Lab at the University of Tokyo, Ryo Hanai, Kimitoshi Yamazaki, and Hiroaki Yaguchi, took advantage of their two-week spring vacation to gain hands-on experience with our PR2 Beta robots and ROS 1.0.  This was JSK's second visit to Willow Garage, and we were excited to watch this new group of researchers use the PR2 for the first time.

With only a couple of ROS tutorials under their belts, the researchers learned how to use ROS libraries for 3D perception, navigation, and controllers. During their two-week stay, the team put together four demos that accomplished various cleanup tasks.  In the first, the PR2 detected items on a table and transferred them to a tray before carrying everything across the room.  The second and third demos had the PR2 pick up and put away dishes and clothing, while the final demo used visual features to differentiate between two similar items (a book and a box).

These demos required a variety of perception approaches. The group applied a circular feature detector to identify the patternless plate, and used the shirt's wrinkles as features to identify the clothing.  To distinguish between the box and the book, the PR2 pulled each item closer to its cameras and then used SURF for identification.  We hope our new tidy PR2 means that we can soon stop picking up after ourselves around the office.

We were very impressed with what the JSK visitors were able to accomplish, especially with community efforts to translate the ROS documentation only just underway. Check out the video to see the demos and hear more about JSK's experience at Willow Garage.  You can also take a look at our visit to Japan last June, when we worked together to port the Navigation stack to JSK's Kawada HRP2-V robot.

March 7, 2010

In addition to core robotics libraries, like navigation, the ROS Box Turtle release also comes with a variety of tools for developing robotics algorithms and applications. One of the most commonly used tools is rviz, the 3-D visualization environment that is part of the ROS visualization stack.

Whether it's 3-D point clouds, camera data, maps, robot poses, or custom visualization markers, rviz can display customizable views of various types of robot data. rviz can show you the difference between the physical world and what the robot is actually seeing, and it can also help you create displays that show users what the robot is planning to do.
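Custom visualization markers, for example, are just ROS messages: any node can publish a visualization_msgs/Marker and rviz will render it. A minimal sketch, using the conventional topic and frame names (adjust both for your robot):

    #include <ros/ros.h>
    #include <visualization_msgs/Marker.h>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "marker_demo");
      ros::NodeHandle nh;
      ros::Publisher pub =
          nh.advertise<visualization_msgs::Marker>("visualization_marker", 1);

      ros::Rate rate(1);
      while (ros::ok())
      {
        visualization_msgs::Marker m;
        m.header.frame_id = "base_link";  // frame the marker is expressed in
        m.header.stamp = ros::Time::now();
        m.ns = "demo";
        m.id = 0;
        m.type = visualization_msgs::Marker::SPHERE;
        m.action = visualization_msgs::Marker::ADD;
        m.pose.position.x = 1.0;
        m.pose.orientation.w = 1.0;
        m.scale.x = m.scale.y = m.scale.z = 0.2;  // 20 cm sphere
        m.color.r = 1.0f;
        m.color.a = 1.0f;  // alpha must be non-zero to be visible
        pub.publish(m);
        rate.sleep();
      }
      return 0;
    }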

You can watch the video above for more details about what the ROS rviz tool has to offer, and you can read documentation and download the source code at: ros.org/wiki/rviz.

March 3, 2010

Now that the ROS Box Turtle release is out, we'd like to highlight some of its core capabilities, and share some of the features that are in the works for the next release.

First up is the ROS Navigation stack, perhaps the most broadly used ROS library. The Navigation stack is in use throughout the ROS community, running on robots both big and small. Many institutions, including Stanford University, Bosch, Georgia Tech, and the University of Tokyo, have configured this library for their own robots.

The ROS Navigation stack is robust, having completed a marathon -- 26.2 miles -- over several days in an indoor office environment.  Whether the robot is dodging scooters or driving around blind corners, the Navigation stack provides robots with the capabilities needed to function in cluttered, real-world environments.
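Applications typically drive the Navigation stack by sending pose goals to the move_base action server via actionlib. A minimal sketch along the lines of the standard navigation tutorials, assuming the conventional server name and a map frame:

    #include <ros/ros.h>
    #include <actionlib/client/simple_action_client.h>
    #include <move_base_msgs/MoveBaseAction.h>

    typedef actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction>
        MoveBaseClient;

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "simple_navigation_goal");

      // Connect to the move_base action server started by the Navigation stack.
      MoveBaseClient client("move_base", true);
      client.waitForServer();

      // Ask the robot to drive to the pose x = 1 m, y = 0 m in the map frame.
      move_base_msgs::MoveBaseGoal goal;
      goal.target_pose.header.frame_id = "map";
      goal.target_pose.header.stamp = ros::Time::now();
      goal.target_pose.pose.position.x = 1.0;
      goal.target_pose.pose.orientation.w = 1.0;

      client.sendGoal(goal);
      client.waitForResult();

      if (client.getState() == actionlib::SimpleClientGoalState::SUCCEEDED)
        ROS_INFO("The base reached the goal");
      return 0;
    }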

You can watch the video above for more details about what the ROS Navigation stack has to offer, and you can read documentation and download the source code at: ros.org/wiki/navigation.