Willow Garage Blog

July 11, 2009

Update: the issues with the video have been worked out, but you can still use the links below to download the video if you prefer.

We've been receiving reports that many of you outside the US have been unable to view our Milestone 2 video, with messages like, "This video is not available in your country due to copyright restrictions." We're working to fix this and have some workarounds below, but first some explanation.

YouTube has informed us that the video "includes audio content that is owned or licensed by UMG [Universal Music Group]." We would find this humorous if it didn't also mean that many of you are unable to watch the video. The video contains almost no audio other than the occasional sound of the PR2 opening a door and plugging in, as well as some applause at the end. We've been joking that it must have matched John Cage's famous 4'33", which consists of four minutes and thirty-three seconds of silence. We wish we knew what song Universal Music Group thinks it matches as we have been looking for some good music to go along with the sounds of the PR2.

We disputed this bogus claim, but our dispute was rejected. We're now entering into a more formal process to file a DMCA counterclaim. Unfortunately, it may take a couple weeks to resolve.

In the meantime, we're happy to provide you links to the original video files used to create the YouTube version, as well as an iPhone-friendly version:

Milestone 2 Highlights (iPhone, 28MB)

Milestone 2 Highlights (HD, 196MB)

 

July 2, 2009

A lot of you watched the video of our Milestone 2 run and had plenty of questions, so we put together a video that explains everything in more detail. The video talks about what sort of sensors we use and gives more insight into what the PR2 alpha robot is doing at each step of the process.

We know that a lot of our research audience out there would like more detail than a short, 5-minute YouTube video can provide, so here are the slides from Sachin Chitta's presentation at the Workshop on Mobile Manipulation in Human Environments at RSS 2009. They provide a more technical overview of the algorithms, representations, and software systems used. We apologize that videos from the talk are not included, but you should be able to see similar footage in the video above.

rss-main-09

June 30, 2009

Sensor Head Buildup

The first prototypes of our new sensor heads are coming together and we should be able to get some test data off of them soon. The lenses that you see here (left to right) are a 5-megapixel Prosilica camera, a dual-stereo Videre camera (wide-angle color and narrow-angle mono), and a red LED pattern projector. The exposure of the narrow-angle stereo pair is synchronized with the flashing of the pattern projector, and the rest of the lenses are "anti-synchronized," i.e. they expose when the LED is off.

You can read our previous post on the design of the new sensor head to get a little more background on everything that led up to what you see here, as well as a preview of what it will look like when it gets all prettied up.

June 30, 2009

Doors

Our biggest hope for the PR2 is that it will be a cutting-edge research platform for creating capable robots, so a huge congratulations and thanks to Radu Bogdan Rusu and Michael Beetz of TUM, as well as Wim Meeussen and Sachin Chitta of Willow Garage, for receiving a Best Paper award at the International Conference on Advanced Robotics (ICAR 2009). ICAR 2009's theme was "able robots," which was also a focus of our Milestone 2 goal of having the PR2 navigate for 26.2 miles, open doors, and plug itself in. Their paper, "Laser-based Perception for Door and Handle Identification," describes in great detail some of the door-detection algorithms that were instrumental in achieving this milestone, which required the PR2 alpha robot to successfully open multiple closed and partially open doors.

Here's the abstract:

"In this paper, we present a laser-based approach for door and handle identification. The approach builds on a 3D perception pipeline to annotate doors and their handles solely from sensed laser data, without any a priori model learning. In particular, we segment the parts of interest using robust geometric estimators and statistical methods applied on geometric and intensity distribution variations in the scan. We present experimental results on a mobile manipulation platform (PR2) intended for indoor manipulation tasks. We validate the approach by generating trajectories that position the robot effector in front of door handles and grasp the handle. The of our approach is demonstrated by real world experiments on a large set of doors."

Paper

June 30, 2009

ROS 0.6.1

ROS 0.6.1 has been released. This is a minor bug-fix update. Updating is not essential unless you've encountered the problems mentioned below.

Fixes:

  • rosrecord: fix_md5sums.py executable permission was not set. If you need to use the fix_md5sums.py script and do not want to download the 0.6.1 release, you can simply set the executable bit manually by running:
    chmod +x `rospack find rosrecord`/scripts/fix_md5sums.py
  • roscpp:
    • gcc 4.4 support
    • Added operator= overload on NodeHandle
    • Bug fix for serializing array of times
  • roslisp:
    • Bug fix for dealing with lookup-hostname-ip-address
    • Bug fix for fixed-size arrays

June 26, 2009

ROS 0.6

ROS 0.6.0 has been released! Our big goals for this release were to make bag files more robust, add a UDP transport for C++ users, and prepare for our upcoming releases of stacks (e.g. navigation). With previous releases, bag files would become unreadable if the definitions of the messages they recorded were changed. The new bag file format ensures that bag files can always be read, and it comes with tools for migrating them when message definitions change. As a result of these updates, all of your existing bag files will need to be updated; there are instructions in our more detailed change list.

Related to our bag file changes, we've also changed our md5sum calculation for message versions to make it easier to move, rename, and add comments to messages. You can now perform these changes without altering the md5sum.

We've added UDP as an experimental transport to our C++ library. This UDP transport is currently point-to-point and targeted at low-latency applications for ROS, such as teleoperation.
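To give a sense of how this transport is requested from user code, here is a minimal, hypothetical roscpp subscriber sketch. It uses the TransportHints interface (the exact API available in the 0.6 release may differ), and the topic name, message type, and node name are placeholders rather than anything shipped with ROS:

    // Illustrative sketch only: a subscriber that asks for the UDP transport.
    // Topic name, message type, and node name are placeholders.
    #include <ros/ros.h>
    #include <std_msgs/String.h>

    void teleopCallback(const std_msgs::String::ConstPtr& msg)
    {
      ROS_INFO("received: %s", msg->data.c_str());
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "udp_listener");
      ros::NodeHandle nh;

      // TransportHints().udp() asks publishers for a UDP connection,
      // trading reliability for the lower latency teleoperation needs.
      ros::Subscriber sub = nh.subscribe("teleop_cmd", 1, teleopCallback,
                                         ros::TransportHints().udp());

      ros::spin();
      return 0;
    }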

Another major update is the introduction of the searchParam API to the Parameter Server. In the past, it was difficult to push nodes down into child namespaces and still have them share common parameters. For example, you may wish to set a parameter like the robot description in a parent namespace and have all children easily read from it. The searchParam API solves this problem by allowing you to search for a parameter in parent namespaces, returning the parameter key that is closest to your node's namespace.
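As a rough illustration of how a node might use this (a hypothetical sketch using the roscpp NodeHandle API, with the robot description parameter from the example above), the node searches parent namespaces for the key and then reads it:

    // Illustrative sketch only: locating a parameter that was set in a
    // parent namespace by searching upward from this node's namespace.
    #include <ros/ros.h>
    #include <string>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "param_search_example");
      ros::NodeHandle nh;

      std::string key;
      // searchParam walks up the namespace hierarchy and returns the key
      // closest to this node, e.g. /robot_description for /arm/controller.
      if (nh.searchParam("robot_description", key))
      {
        std::string description;
        nh.getParam(key, description);
        ROS_INFO("found %s (%lu bytes)", key.c_str(),
                 (unsigned long)description.size());
      }
      else
      {
        ROS_WARN("robot_description was not found in any parent namespace");
      }
      return 0;
    }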

There are many other updates in this release, so please consult the detailed list of changes. You can also consult our roadmap to get a preview of the remaining features we are planning for ROS as we move towards a 1.0 release.

June 25, 2009


Several of us will be at Robotics: Science and Systems (RSS) 2009 next week in Seattle. Here's a partial schedule:

Sunday, June 28, 2009

Workshop on Mobile Manipulation in Human Environments

Willow Garage is co-organizing this workshop, which will discuss state-of-the-art research in mobile manipulation. Sachin Chitta will be giving a talk during Session I (9:50-10:10) about our progress with autonomous door opening and plugging in.

Workshop on Algorithmic Automation

Gary Bradski will be giving a talk from 11:00-11:25 during Session 2. He will discuss a proposed modular, scalable architecture for recognizing objects and their 6DOF pose.

Workshop on Good Experimental Methodology in Robotics

Leila Takayama will be giving a talk from 9:30-9:50. She will be presenting her paper, "Toward a Science of Robotics: Goals and Standards for Experimental Research."

Workshop on Bridging the Gap Between High-Level Discrete Representations and Low-Level Continuous Behaviors

Bhaskara Marthi will be presenting on "Angelic Hierarchical Planning" during Session 4 from 5:00-5:20. His talk will discuss the use of "angelic semantics" to integrate planning at various levels of abstraction. He will also present some preliminary results showing how task and robotic motion planning can be combined using this approach.

Monday, June 29, 2009, 6:30 – 9:30, Poster Session (Poster 16)

View-Based Maps, Kurt Konolige, Michael Calonder, James Bowman, Patrick Mihelich, JD Chen, Pascal Fua, Vincent Lepetit

Abstract: Robotic systems that can create and use visual maps in realtime have obvious advantages in many applications, from automatic driving to mobile manipulation in the home. In this paper we describe a mapping system based on retaining stereo views of the environment that are collected as the robot moves. Connections among the views are formed by consistent geometric matching of their features. Out-of-sequence matching is the key problem: how to find connections from the current view to other corresponding views in the map. Our approach uses a vocabulary tree to propose candidate views, and a strong geometric filter to eliminate false positives; essentially, the robot continually re-recognizes where it is. We present experiments showing the utility of the approach on video data, including map building in large indoor and outdoor environments, map building without localization, and re-localization when lost.

June 19, 2009

New Sensor Head

Related: PR2 Sensor Head Design Gallery

Although we are an open company, we've held some secrets back: we've tried to keep the appearance of the final PR2 robot under wraps. Part of that is because we are still tweaking some details and don't want to confuse people with non-final renderings, but part of it is also so that it will be a fun surprise when the first PR2 beta robots roll off the line. Last week we started selecting the colors for the PR2 beta robots, so it's fair to say that the design process is coming to a close and we have a better idea of what the robot will look like.

While we won't fully divulge the final robot design (we like our surprises), we let the cat partly out of the bag with some photos that have been going out to media publications. It's only fair to share those here with you and also talk about some of the new details that you see. The photo above is still a PR2 alpha prototype, but regular visitors may be wondering what that row of six lenses is. That's a mockup of our sensor head that will be shipping with the PR2 beta robots.

Our PR2 alpha robots have not had a sensor head design. Instead, the PR2 alpha head is just a grid of bolt holes that have allowed our researchers to experiment with different sensors in order to determine what will ship with the PR2 beta robot. This has meant that the PR2 alpha robots have been a bit more "rugged" looking -- we've referred to one of the robots as the "Mad Max" robot because it looks like it's been stealing sensors off of other robots. Our first two milestones stressed the sensing capabilities of the robot and have given us time to develop our own sensors to support bleeding-edge perception research.

With a Hokuyo UTM laser on the base, a tilting Hokuyo UTM just below the head, and two wide-angle cameras in the forearms, it was up to the sensor head to fill out our remaining sensing needs. We knew we would need stereo cameras to support algorithms like Visual SLAM as well as finding door handles, plugs, and other objects. We would also need a camera that could provide higher resolution images. After testing various high-resolution cameras, we settled on 5-megapixel Prosilica cameras for their simplicity, global shutter, and resolution. For stereo, we chose two stereo cameras: one color and one mono. The mono camera's 1-2ms exposure is synced with the LED projector, which projects a static pattern that gives the stereo camera added texture for feature matching and much better 3D data. The rest of the cameras are 'anti-synced' so the pattern does not show up in their images. The LED pattern projector was developed in-house and works in combination with the mono stereo camera to give back high-quality 3D shapes. This will be critical as researchers use the PR2 to manipulate objects on tables.

Sensor Head Section

For the physical design of the sensor head, we consulted with a broad range of people. Our researchers gave us feedback on various mounting options for the final sensor head, which mainly boiled down to, "More bolt pattern!" We considered a wide range of sensor arrangements, some of which we included below, and we worked with a color expert to see how color and shapes affected how the sensors were perceived. We also conducted surveys on Mechanical Turk to understand how others would react to them and consulted the human-robot interaction design literature, where we learned that robot heads that are wider than they are tall are perceived as being less human-like. This helped us to rule out tall head designs that would encourage unreasonable human-like expectations of PR2.

sensor head brainstorming

More: PR2 Sensor Head Design Gallery

Our designs converged on the layout with the four stereo camera lenses combined into a single unit in the middle and the two larger lenses on the outside. We finally moved the stereo camera lenses lower than the others after seeking some inspiration from the Mini Cooper headlight designs, which are both mechanical and approachable. The problem with this layout was that such a combined stereo camera didn't exist, so a custom stereo camera was developed with Videre Design to adapt existing designs into a new combined sensor that interleaves the mono and color cameras in a single package.

We sent all this feedback to our industrial designer, who was able to take this broad range of requirements and distill it into a cohesive design. We worked through several iterations of sketches, which led to CAD models, which then led to this foam mockup, where we tested how faceplates would affect the appearance of the robot. The mockups are important, as sketches and 3D renderings have trouble capturing what something looks like in person.

Based on this foam mockup, we made some more modifications and produced the mockup you see in the first photo. Orders are now being placed for the real parts and we're looking forward to seeing the final product. As you can see, there's still plenty of bolt-hole pattern so you can continue to do the same sort of research we did in finding the best sensors for the job.

Sensor Head Mockups

June 19, 2009

Door opening

Our second milestone meant a lot to us as a company, but we didn't quite expect the attention it would get from the rest of the community. We shot some new video this week to better explain what we did and how we did it, so look for that in future posts. We're also working hard to clean up the code we used so that all the robot coders out there can get more involved with our open source efforts.

In the meantime, we want to say a big thank you to all of you that have been writing articles on what we've been doing and offering your own perspectives. It has meant a lot to the people around the office when we see these articles show up in our blog readers, when our friends send us links, and when we can forward articles to our moms.

To share that love back, a big thanks to (and sorry to those we missed):

June 16, 2009

ROS 0.5.3

ROS 0.5.3 has been released, which is a minor update to our ROS 0.5 release. Most of the changes in this release address installation and build compatibility issues. We are in the process of transitioning our build and installation process to better support a broader range of Linux platforms as well as provide for more automated installation. Release notes are below.

Change List

  • rosbuild: small fixes to support CMake 2.4.6
  • roslaunch: add dependency on rosout
  • rospy: bug fix to private command-line parameters
  • rosdep: updated dependency database with Debian package names
  • rosconfig: beginnings of a fully-automated installation method