Learning Proprioceptive Properties

One of the hallmarks of human object perception is our ability to use a wide variety of sensory modalities. In contrast, most robots today rely almost exclusively on vision and/or 3D laser scan data for solving perception tasks. Many object properties (e.g., weight, material type) cannot be detected using vision alone: visual feedback will not allow a robot to tell the difference between a full bottle and an empty one that otherwise look identical.

To address these types of problems, Jivko Sinapov from the Developmental Robotics Laboratory at Iowa State explored how proprioceptive sensory feedback, in the form of detected joint motor efforts, can be used by the PR2 for object perception. To use proprioception, the PR2 performed exploratory behaviors on objects, such as lifting, unsupported holding, and sliding an object across the table. The robot learned a recognition model to detect whether an object (e.g., a bottle) is full or empty, using features extracted from the joint efforts recorded while performing each behavior on the object. With this model, the robot could recognize whether a bottle was full or empty by lifting it from a tabletop, as well as by simply holding it in place.
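As a rough illustration of this pipeline (not the actual proprioception package, whose API may differ), the sketch below assumes each exploratory behavior yields a joint-effort trace as a timesteps-by-joints array, summarizes it with simple per-joint statistics, and trains a standard classifier on the result. The function names, feature choices, and use of scikit-learn are all assumptions for illustration.

```python
# Hypothetical sketch of learning a full/empty recognition model from
# joint-effort traces; not the proprioception package's actual code.
import numpy as np
from sklearn.svm import SVC

def effort_features(trace):
    """Summarize a (timesteps x joints) effort trace into one feature vector
    of per-joint mean, standard deviation, min, and max efforts."""
    return np.concatenate([trace.mean(axis=0), trace.std(axis=0),
                           trace.min(axis=0), trace.max(axis=0)])

def train_recognition_model(traces, labels):
    """traces: list of effort arrays; labels: 1 = full, 0 = empty."""
    X = np.array([effort_features(t) for t in traces])
    return SVC(kernel="rbf").fit(X, labels)

# At test time, the robot performs the same behavior (e.g., lifting) on a
# novel object and classifies the resulting trace:
# prediction = model.predict([effort_features(new_trace)])
```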

These recognition models were tested on several different tasks. In one, the PR2 had to sort objects, clearing only the empty bottles off a table. In another, the PR2 had to estimate the weight of bottles. A third task, sliding boxes across a table, tested how quickly new recognition models could be learned: with minimal training experience (10-20 minutes), the robot was able to learn an accurate model for distinguishing between full and empty boxes.
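The weight-estimation task fits the same pattern: under the same assumptions as the sketch above, it can be cast as regression from the joint-effort features to a measured weight. Again, this is an illustrative guess at one reasonable approach, not the method actually used in the work.

```python
# Hypothetical sketch: weight estimation as regression over the same
# joint-effort features (effort_features is defined in the sketch above).
from sklearn.svm import SVR

def train_weight_estimator(traces, weights_kg):
    """traces: list of effort arrays; weights_kg: measured object weights."""
    X = np.array([effort_features(t) for t in traces])
    return SVR(kernel="rbf").fit(X, weights_kg)

# At test time:
# estimated_kg = estimator.predict([effort_features(new_trace)])[0]
```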

For more information, please see Jivko's slides below (download pdf) or check out the proprioception package on ROS.org.