3D Pose Estimation, Tracking and Model Learning of Articulated Objects from Dense Depth Video using Projected Texture Stereo

Publication Type: Conference Paper
Year of Publication: 2010
Authors: Jürgen Sturm, Kurt Konolige, Cyrill Stachniss, and Wolfram Burgard
Conference Name: Proc. of the Workshop RGB-D: Advanced Reasoning with Depth Cameras at Robotics: Science and Systems (RSS)
Date Published: 06/2010
Conference Location: Zaragoza, Spain
Keywords: perception
Abstract

Service robots deployed in domestic environments generally need the capability to deal with articulated objects such as doors and drawers in order to fulfill certain mobile manipulation tasks. This, however, requires that the robots are able to perceive articulated furniture objects such as cupboards, dishwashers, and cabinets. In this paper, we present an approach for detecting, tracking, and learning 3D articulation models for doors and drawers without using artificial markers. Our approach uses a highly efficient, sampling-based approach to rectangle detection in dense depth images obtained from a self-developed projected texture stereo vision system. The robot can use the generative models learned for the articulated objects to estimate their mechanism type and their current configuration, and to predict their opening trajectory. In our experiments we demonstrate that (1) we obtain dense depth images in the workspace of our robot using our camera system, (2) we are able to robustly and reliably detect cabinet fronts from depth images, and (3) we are able to learn accurate articulation models for the observed articulated objects. We furthermore provide a detailed error analysis based on ground-truth data obtained in a motion capture studio.
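To make the mechanism-type estimation mentioned in the abstract concrete, here is a minimal illustrative sketch (not the authors' implementation, which learns full generative 3D articulation models): a drawer's handle traces a straight line (prismatic joint), while a door's handle traces a circular arc (revolute joint), so a trajectory can be classified by comparing the residuals of a least-squares line fit and circle fit. The function names and the 2D simplification are assumptions made for this example.

```python
# Hypothetical sketch: classify a tracked handle trajectory as prismatic
# (drawer) or revolute (door) by comparing line-fit vs. circle-fit residuals.
import numpy as np

def line_residual(pts):
    """RMS distance of 2D points to their best-fit line (via SVD/PCA)."""
    centered = pts - pts.mean(axis=0)
    # The smallest singular value measures spread orthogonal to the line.
    s = np.linalg.svd(centered, compute_uv=False)
    return s[-1] / np.sqrt(len(pts))

def circle_residual(pts):
    """RMS distance of 2D points to an algebraically fitted (Kasa) circle."""
    x, y = pts[:, 0], pts[:, 1]
    # Solve x^2 + y^2 = 2*cx*x + 2*cy*y + c in the least-squares sense.
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    dists = np.hypot(x - cx, y - cy) - r
    return np.sqrt(np.mean(dists**2))

def classify(pts):
    return "prismatic" if line_residual(pts) < circle_residual(pts) else "revolute"

# A drawer slides along a line; a door swings on an arc.
t = np.linspace(0.0, 1.0, 50)
drawer = np.column_stack([t, 0.2 * t])            # straight pull
door = np.column_stack([np.cos(t), np.sin(t)])    # 1-radian arc, radius 1
print(classify(drawer))  # prismatic
print(classify(door))    # revolute
```

In the paper's setting, the tracked pose of the detected cabinet-front rectangle over time plays the role of these 2D points, and the fitted joint parameters (axis, radius) then let the robot predict the remainder of the opening trajectory.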

Attachment: sturm10rssws[1].pdf (953.56 KB)