Visual Odometry for the PR2

Our initial milestone with the PR2 used laser data and AMCL (Adaptive Monte Carlo Localization) to determine the robot's position, but we are now integrating stereo camera data to calculate the PR2's position more robustly in 3D. Our perception group has been hard at work developing these new capabilities, and you can see some of their recent work in this video. It shows a test of the visual_odometry package, which uses a Videre stereo camera to track the position of the PR2 as it makes a circuit around the room.

Visual odometry works by first finding good image features to track (green points in the bottom-left window) and matching them from frame to frame (green lines in the bottom-left window). It uses these point tracks to compute a likely pose for each frame, as well as the camera's path (bottom right). As the visual odometry system tracks the position of the robot in 3D, it also calculates the horizon line (top-left window).
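The core idea of recovering a pose from point tracks can be sketched with a small, self-contained example. This is not the visual_odometry package's actual code; it is a hedged illustration using the classic Kabsch/Procrustes method to recover a 2-D rotation and translation from a set of matched points, the same kind of alignment step that sits at the heart of pose estimation from tracked features:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Recover R and t such that dst ~ src @ R.T + t, given matched
    2-D point sets (Kabsch/Procrustes alignment)."""
    # Center both point clouds on their centroids.
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Cross-covariance matrix and its SVD give the optimal rotation.
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered matrix.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    # Translation moves the source centroid onto the destination centroid.
    t = dst.mean(axis=0) - src.mean(axis=0) @ R.T
    return R, t

# Demo: rotate and translate some synthetic "tracked features",
# then recover the motion from the correspondences alone.
rng = np.random.default_rng(0)
pts = rng.standard_normal((40, 2))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.5, -1.2])
moved = pts @ R_true.T + t_true
R_est, t_est = estimate_rigid_transform(pts, moved)
```

A real visual odometry pipeline works in 3D with stereo-triangulated points and robust outlier rejection (e.g. RANSAC), but the alignment step above is the same in spirit: matched points in, relative pose out.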

The visual odometry system was accurate to within 0.125 meters over the 25 meter journey, an error of about 0.5%. In the future we're planning to use visual odometry as part of our mapping and planning systems. We are also working on "place recognition", which will allow the PR2 to recognize where it is when it wakes up, if it has been there before.
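The error figure above is simply drift expressed as a fraction of distance travelled, which is the standard way to report odometry accuracy:

```python
# Drift as a percentage of distance travelled (numbers from the text above).
error_m = 0.125     # final position error in meters
distance_m = 25.0   # length of the circuit in meters
drift_pct = 100.0 * error_m / distance_m
print(f"{drift_pct:.1f}%")  # → 0.5%
```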


Information about visual odometry

Is it possible to see the source code of the visual odometry algorithm?

Yes, all of our source code is open source and available online. The SVN URL of the visual_odometry code is

Issue with the SVN

Hi, the SVN link for visual odometry doesn't work. Kindly help.

The code is in the process of being rewritten and will be re-released when ready. You can find the old code in:

Warning: this is old code, so it will not build with our current releases.