Capturing Accurate Camera Poses

During his internship at Willow Garage, Sebastian Klose, a Ph.D. student from the Technical University of Munich, focused on integrating visual SLAM with measurements from an inertial measurement unit (IMU). Using a handheld unit with a Microsoft Kinect and an IMU mounted on it, Sebastian set out to capture 3D maps of a room and the objects in it, even when visual tracking had noticeable gaps. Visual SLAM usually creates good 3D maps, but it works less well when the camera is pointing at a blank wall or moving too fast.

To bridge these gaps in visual features, an IMU's accelerometers and gyroscopes can temporarily track the six-dimensional (6D) motion of the camera until new features become visible in the camera image. (Check out the sensors supported by ROS.)
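To give a feel for this dead-reckoning step, here is a minimal sketch (not Sebastian's actual code) of how gyroscope and accelerometer readings can propagate a camera pose between visual fixes. The function name, units, and noise-free readings are illustrative assumptions:

```python
import numpy as np

def skew(w):
    # Skew-symmetric matrix so that skew(w) @ v == np.cross(w, v)
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def propagate(R, p, v, gyro, accel, dt, g=np.array([0.0, 0.0, -9.81])):
    """Dead-reckon one IMU step (hypothetical helper): rotate by the
    body-rate gyro reading, then integrate the gravity-compensated
    acceleration twice to advance velocity and position."""
    # First-order update of the rotation matrix from the gyro rate
    R_next = R @ (np.eye(3) + skew(gyro) * dt)
    # Specific force rotated into the world frame, with gravity removed
    a_world = R @ accel + g
    v_next = v + a_world * dt
    p_next = p + v * dt + 0.5 * a_world * dt**2
    return R_next, p_next, v_next
```

A stationary IMU measures only gravity as specific force, so feeding zero gyro rates and an upward 9.81 m/s² accelerometer reading leaves the pose unchanged; real IMU noise and bias make the estimate drift, which is why this only bridges short gaps.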

Sebastian used an Extended Kalman Filter (EKF) to track the camera's 6D pose. The EKF uses the poses reported by visual SLAM to estimate the biases in the IMU measurements. When the camera moves too far from the object and visual tracking is lost, the filter's pose estimate still indicates where the camera has moved. Check out the video for details.
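The bias-estimation idea can be illustrated with a toy one-dimensional Kalman filter (a simplified stand-in for the real EKF, not the imu_filter code): the state holds an angle and a gyro bias, the gyro reading drives the predict step, and a visual-SLAM angle fix drives the update step, which lets the filter infer the bias. All noise values here are made-up assumptions:

```python
import numpy as np

class BiasKalman1D:
    """Toy 1-D filter: state x = [angle, gyro_bias]. SLAM observes
    the angle; the bias is inferred through the correlation that the
    predict step builds between angle error and bias."""
    def __init__(self):
        self.x = np.zeros(2)              # [angle, bias]
        self.P = np.eye(2)                # state covariance
        self.Q = np.diag([1e-4, 1e-6])    # process noise (assumed)
        self.R = 1e-2                     # SLAM measurement noise (assumed)

    def predict(self, gyro, dt):
        # angle += (gyro - bias) * dt ; bias follows a random walk
        F = np.array([[1.0, -dt], [0.0, 1.0]])
        self.x = np.array([self.x[0] + (gyro - self.x[1]) * dt, self.x[1]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, slam_angle):
        H = np.array([1.0, 0.0])          # SLAM measures the angle only
        y = slam_angle - H @ self.x       # innovation
        S = H @ self.P @ H + self.R       # innovation variance (scalar)
        K = self.P @ H / S                # Kalman gain
        self.x = self.x + K * y
        self.P = (np.eye(2) - np.outer(K, H)) @ self.P
```

If a stationary camera's gyro reads a constant 0.5 rad/s while SLAM keeps reporting an angle of zero, the estimated bias converges toward 0.5; once the bias is known, the filter can dead-reckon more accurately whenever SLAM fixes drop out.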

The code is available in the imu_filter stack. It works with visual SLAM algorithms and can be used with any other input that provides 6D poses.