Finding Graspable Features in Sensor Data

Emma Zhang, from the Robotics Lab at Rensselaer Polytechnic Institute in New York, recently completed an internship at Willow Garage. Emma worked on an algorithm that attempts to identify object locations suitable for grasping with a parallel gripper, locations we refer to as "graspable features".

Using point clouds from a depth camera as input, the method extracts a gripper-sized voxel grid around each potential graspable feature, encoding occupied, empty, and unknown regions of space. The resulting grid is then matched against a large set of similar grids, computed from both graspable and non-graspable features and labeled in simulation.
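The core of the idea, a tri-state voxel grid compared against a library of labeled examples, is compact enough to sketch. The snippet below is a rough illustration rather than Emma's actual implementation: the function names, the grid dimensions, the ray-sampling free-space test, and the voxel-wise nearest-neighbor vote are all assumptions made for the example.

```python
import numpy as np

# Voxel states: the method distinguishes occupied, empty, and unknown space.
OCCUPIED, EMPTY, UNKNOWN = 1, 0, -1

def extract_feature_grid(cloud, center, camera_origin,
                         grid_dim=8, voxel_size=0.01):
    """Build a gripper-sized voxel grid around `center`, labeling each
    voxel OCCUPIED (contains sensed points), EMPTY (observed free space
    between the camera and a surface), or UNKNOWN (never observed)."""
    half = grid_dim * voxel_size / 2.0
    origin = np.asarray(center) - half      # world-frame corner of the grid
    grid = np.full((grid_dim,) * 3, UNKNOWN, dtype=np.int8)

    # Voxels containing sensed surface points are OCCUPIED.
    idx = np.floor((cloud - origin) / voxel_size).astype(int)
    for i, j, k in idx[np.all((idx >= 0) & (idx < grid_dim), axis=1)]:
        grid[i, j, k] = OCCUPIED

    # Coarsely sample each camera ray: voxels the ray crosses before
    # reaching the sensed surface are observed EMPTY.
    for p in cloud:
        ray = p - camera_origin
        dist = np.linalg.norm(ray)
        for t in np.arange(0.0, dist - voxel_size, voxel_size / 2.0):
            q = camera_origin + ray * (t / dist)
            v = np.floor((q - origin) / voxel_size).astype(int)
            if np.all((v >= 0) & (v < grid_dim)) and grid[tuple(v)] == UNKNOWN:
                grid[tuple(v)] = EMPTY
    return grid

def predict_graspable(grid, database, labels, k=5):
    """Vote among the k nearest labeled grids, using voxel-wise
    disagreement (Hamming distance) as the matching measure."""
    dists = np.count_nonzero(database != grid.ravel(), axis=1)
    nearest = np.argsort(dists)[:k]
    return labels[nearest].mean() > 0.5
```

In this sketch, a candidate feature is declared graspable when a majority of its nearest neighbors in the labeled database were graspable in simulation; the actual matching procedure and distance measure used in the project may differ.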

Emma's results show that the outcome of the matching process is a good predictor of grasp quality, as evaluated in simulation. We believe that, by operating directly on real sensor data and reasoning about missing information as well as sensed object surfaces, the graspable feature evaluation algorithm has the potential to tackle complex or cluttered scenes in both autonomous and human-in-the-loop grasping tasks.

For more details, see Emma's presentation below, or check out the graspable_features package on ROS.org.