Enabling robots to see better through improved camera calibration
During his internship at Willow Garage, Pablo Speciale, a master's student from Vibot, worked on helping robots better perceive their environment by improving calibration across multiple RGB cameras. Proper calibration allows a robot to interact accurately with its environment by combining measurements from its different cameras in an optimization process.
In this method, calibration is obtained by minimizing the reprojection error with a non-linear solver (in our case, Ceres Solver). The approach assumes that one camera is already calibrated with respect to the robot, and takes an initial estimate of the remaining camera poses from the robot model. The solver then refines the relative poses between cameras using measurements of a 2D pattern, in this case a checkerboard held by the PR2. Calibration is considered correct when all observed points move as a single rigid body as the robot moves its joints.
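To illustrate the core idea, here is a minimal sketch of reprojection-error minimization in Python with SciPy. This is not the project's code (which uses Ceres Solver in C++); it simulates a single unknown camera pose relative to an already-calibrated reference, with synthetic checkerboard observations and made-up intrinsics, and recovers the pose by non-linear least squares.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Hypothetical pinhole intrinsics (assumption: no lens distortion)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(points_3d, rvec, tvec):
    """Project 3D points into the camera given a rotation vector and translation."""
    R = Rotation.from_rotvec(rvec).as_matrix()
    cam = points_3d @ R.T + tvec      # transform into the camera frame
    uv = cam @ K.T                    # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]     # perspective divide

# Synthetic checkerboard corners (6x5 grid, 3 cm squares) about 1 m away
xs, ys = np.meshgrid(np.arange(6) * 0.03, np.arange(5) * 0.03)
board = np.column_stack([xs.ravel(), ys.ravel(), np.full(xs.size, 1.0)])

# Ground-truth relative pose of the second camera (to be recovered)
rvec_true = np.array([0.02, -0.30, 0.01])
tvec_true = np.array([0.10, 0.00, 0.02])
observed = project(board, rvec_true, tvec_true)

def residuals(params):
    """Reprojection error: predicted minus observed pixel coordinates."""
    return (project(board, params[:3], params[3:]) - observed).ravel()

# Start from a rough initial guess (in practice, from the robot's kinematic model)
result = least_squares(residuals, np.zeros(6))
rvec_est, tvec_est = result.x[:3], result.x[3:]
```

In the real multi-camera problem, the solver jointly optimizes one pose per camera (plus the checkerboard poses) over many robot configurations, but each residual term has exactly this form.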
The goal of this project was to create a ROS calibration package supporting multiple camera types, including RGB cameras, the Microsoft Kinect, and Prosilica high-definition cameras. Thanks to Vincent Rabaud and David Fofi for assisting with this project.
Please visit the following links for more information on this project and related past work:
GitHub repository (in development)
Original calibration work: www.ros.org/wiki/calibration