Visual SLAM for ROS

Helen Oleynikova, a student at Olin College of Engineering, spent her summer internship at Willow Garage working on improving visual SLAM libraries and integrating them with ROS. Visual SLAM is a useful building block in robotics with several applications, such as localizing a robot and creating 3D reconstructions of an environment.

Visual SLAM (Simultaneous Localization and Mapping) uses camera images to build a map of a new environment while estimating the robot's position within it. It works by tracking image features between camera frames, and determining the robot's pose and the position of those features in the world based on their relative movement. This tracking provides an additional odometry source, which is useful for determining the location of a robot. As camera sensors are generally cheaper than laser sensors, this can be a useful alternative or complement to laser-based localization methods.
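The core idea of estimating motion from tracked features can be illustrated with a toy sketch. This is not code from the vslam stack; all function names, coordinates, and the simple averaging step are illustrative assumptions. A real system would match feature descriptors across frames and recover full 3D pose from stereo geometry, but the intuition is the same: if the scene is static, features appear to shift opposite to the camera's own motion.

```python
# Toy sketch of the visual-odometry idea behind visual SLAM.
# Hypothetical helper names; a real pipeline (e.g. the ROS vslam stack)
# matches descriptors and solves for a full 6-DoF pose.

def track_features(prev_pts, curr_pts):
    """Pair up features by index (a real system would match descriptors)."""
    return list(zip(prev_pts, curr_pts))

def estimate_translation(matches):
    """Estimate 2D camera translation as the negated mean feature motion.

    For a static scene, features appear to move opposite to the camera,
    so the camera's translation is the negative of the average shift.
    """
    n = len(matches)
    dx = sum(c[0] - p[0] for p, c in matches) / n
    dy = sum(c[1] - p[1] for p, c in matches) / n
    return (-dx, -dy)

# Simulated frames: the camera moved (+1, -0.5), so every tracked
# feature appears shifted by (-1, +0.5) in the image.
prev_frame = [(10.0, 5.0), (20.0, 8.0), (30.0, 2.0)]
curr_frame = [(9.0, 5.5), (19.0, 8.5), (29.0, 2.5)]

matches = track_features(prev_frame, curr_frame)
print(estimate_translation(matches))  # (1.0, -0.5)
```

Chaining such frame-to-frame estimates over time yields an odometry track, which the mapping side of SLAM then refines by recognizing previously seen features.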

In addition to improving the VSLAM libraries, Helen put together documentation and tutorials to help you integrate Visual SLAM with stereo camera data on your robot. To find out more, check out the vslam stack on ROS.org. For detailed technical information, you can also check out Helen's presentation slides below.