We present a novel algorithm for estimating the orientation of objects from 3D point cloud data. Pose estimation (of which recovering the orientation is typically the hardest part) is central to robots performing mobile manipulation tasks: in order to grasp an object, the robot must know how the object is situated in space. Our method uses informative local surface features that accurately capture local geometry to vote for object orientation. While the geometry of an individual surface feature may not uniquely determine the object's orientation, it often provides enough information to constrain the set of possible orientations, so that combining many such soft constraints can yield a correct orientation estimate. We attach a local coordinate system to each feature, and encode the space of possible orientation estimates using Bingham Mixture Models (BMMs), probability distributions on the hypersphere of unit quaternions (the unit 3-sphere embedded in 4D), which correctly capture the topology of the space of 3D rotations.
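To make the representation concrete, the following is a minimal numpy sketch (not the paper's implementation) of an unnormalized Bingham density over unit quaternions and a simple mixture of such densities; the function names and parameterization here are illustrative assumptions.

```python
import numpy as np

def bingham_logpdf_unnorm(q, V, lam):
    """Unnormalized log-density of a Bingham distribution on the unit
    quaternion hypersphere S^3: log p(q) = sum_i lam_i * (v_i . q)^2 + const.
    V: (3, 4) array of orthogonal direction vectors; lam: (3,) array of
    non-positive concentrations (more negative = more peaked)."""
    q = np.asarray(q, dtype=float)
    q = q / np.linalg.norm(q)          # project onto the unit sphere
    return float(np.sum(lam * (V @ q) ** 2))

def bmm_logpdf_unnorm(q, components, weights):
    """Unnormalized log-density of a Bingham Mixture Model: the log of a
    weighted sum of component densities.  components: list of (V, lam)."""
    vals = [w * np.exp(bingham_logpdf_unnorm(q, V, lam))
            for (V, lam), w in zip(components, weights)]
    return float(np.log(sum(vals)))
```

The mode of each component is the quaternion orthogonal to all rows of `V`; there its log-density is 0, the maximum of the unnormalized form.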

Our algorithm is robust to changes in visual appearance (color, texture, lighting), as well as to occlusions. Unlike previous voting methods, such as the generalized Hough transform or geometric hashing, our method uses probabilistic inference with BMMs to compute a parametric posterior distribution over object poses. It is therefore extremely fast, since no search over the parameter space is required.
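One standard reason Bingham-based inference avoids search is that the product of Bingham densities exp(q^T A_i q) is again Bingham, with parameter matrix sum_i A_i, so evidence from many features can be fused in closed form. A minimal numpy sketch of this fusion step (a hypothetical helper, not the paper's code):

```python
import numpy as np

def fuse_bingham_evidence(A_list):
    """Combine Bingham evidence terms exp(q^T A_i q) by summing their
    symmetric 4x4 parameter matrices; the result is the (unnormalized)
    Bingham posterior.  Returns shifted concentrations, directions, and
    the MAP orientation (eigenvector of the largest eigenvalue)."""
    A = sum(A_list)
    w, V = np.linalg.eigh(A)           # ascending eigenvalues
    lam = w - w.max()                  # shift so the largest is zero
    mode = V[:, np.argmax(w)]          # MAP quaternion (up to sign)
    return lam, V, mode
```

Because the posterior stays in this parametric family, its mode is read off from an eigendecomposition rather than found by scanning a discretized pose space.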

%8 07/2011