Advancements In Active Grasping
During his internship at Willow Garage, University of Southern California student Jon Binney worked on improving the way robots grasp. He taught them to refine their estimate of where an object is, and what it is, by feeling for it with their grippers.
Robot grasping is often poorly coupled with object recognition. Typically, an object recognition algorithm is run to find the "most likely" object identity (shape) and pose, and the robot then executes a grasp motion relative to that pose. This approach is brittle for two reasons. First, the pose provided by object recognition usually has some amount of error (further exacerbated by imperfect calibration between the robot's cameras and grippers), which can cause the grasp to fail. Second, object recognition sometimes misidentifies the object entirely.
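To see why this open-loop pipeline is fragile, consider a minimal sketch (not Jon's code; the tolerance and noise numbers are illustrative assumptions) in which a grasp succeeds only when the recognizer's pose error is smaller than the gripper's tolerance:

```python
import random

# Illustrative numbers: a gripper that tolerates 1 cm of pose error,
# fed by a recognizer whose pose estimate has 1.5 cm standard deviation.
GRASP_TOLERANCE = 0.010  # metres
RECOGNITION_SIGMA = 0.015  # metres

def recognize(true_pose):
    """Stand-in for an object recognizer: returns the 'most likely'
    pose, corrupted by typical estimation error."""
    return true_pose + random.gauss(0.0, RECOGNITION_SIGMA)

def open_loop_grasp(true_pose):
    """Plan once from the recognized pose and execute with no feedback.
    The grasp succeeds only if the initial estimate happened to be close."""
    estimated_pose = recognize(true_pose)
    return abs(estimated_pose - true_pose) <= GRASP_TOLERANCE

random.seed(0)
trials = [open_loop_grasp(0.5) for _ in range(1000)]
success_rate = sum(trials) / len(trials)
print(f"open-loop success rate: {success_rate:.2f}")
```

With these (assumed) noise figures, a large fraction of grasps fail even though each one was planned "correctly" for the estimated pose, which is exactly the failure mode the next paragraph addresses.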
To overcome this, Jon implemented a system that uses the pose provided by object recognition as the center of a distribution over possible object poses, and also considers other hypotheses for the object's shape. As the robot attempts to grasp the object, it uses measurements from tactile sensors on the gripper, together with the robot's other sensing, to update the distribution over object poses and shapes. Tactile contact is informative in both directions: a touch tells the robot where the object is, and an empty sweep tells it where the object is not. If at any point the grasp being executed seems unlikely to succeed (based on the updated pose distribution), the robot backs off and picks a new grasp to try.
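One common way to maintain such a distribution is a particle filter; the sketch below (an assumption about the general technique, not Jon's implementation) keeps a set of 2-D pose hypotheses, reweights them after each tactile reading, and estimates whether a planned grasp would still capture the object. The object radius, noise values, and function names are all hypothetical:

```python
import math
import random

def init_belief(estimated_pose, n=500, sigma=0.02):
    """Seed pose hypotheses around the recognizer's estimate (x, y)."""
    return [(estimated_pose[0] + random.gauss(0.0, sigma),
             estimated_pose[1] + random.gauss(0.0, sigma))
            for _ in range(n)]

def contact_likelihood(pose, contact_point, touched, radius=0.05, noise=0.01):
    """Likelihood of one tactile reading given a hypothesized object center.
    A touch is likely when the contact point lies on the object's surface;
    a 'no contact' reading is likely when the finger passed outside it."""
    d = math.dist(pose, contact_point)
    on_surface = math.exp(-((d - radius) ** 2) / (2.0 * noise ** 2))
    return on_surface if touched else 1.0 - 0.9 * on_surface

def update_belief(particles, contact_point, touched):
    """Importance-resample the hypotheses using the tactile likelihood.
    Both touches and misses reshape the belief, so the robot learns
    where the object is AND where it is not."""
    weights = [contact_likelihood(p, contact_point, touched) for p in particles]
    total = sum(weights) or 1e-12
    weights = [w / total for w in weights]
    return random.choices(particles, weights=weights, k=len(particles))

def grasp_success_prob(particles, grasp_center, tolerance=0.02):
    """Fraction of pose hypotheses the planned grasp would still capture.
    If this drops too low mid-grasp, back off and pick a new grasp."""
    hits = sum(math.dist(p, grasp_center) <= tolerance for p in particles)
    return hits / len(particles)

random.seed(1)
belief = init_belief((0.50, 0.50))
# The finger touches something at (0.55, 0.50): keep hypotheses
# consistent with a surface contact there.
belief = update_belief(belief, (0.55, 0.50), touched=True)
print(f"P(grasp at estimate succeeds): {grasp_success_prob(belief, (0.50, 0.50)):.2f}")
```

The back-off behavior in the article corresponds to thresholding `grasp_success_prob` during execution: when the updated belief says the current grasp is unlikely to work, the robot withdraws and replans against the new distribution.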
Code created for the project can be downloaded here.