IROS 2010 Workshop: Defining and Solving Realistic Perception Problems in Personal Robotics

When: October 18, 2010

Where: International Conference on Intelligent Robots and Systems (IROS) 2010, Taipei, Taiwan



09:00 - 09:10 Welcome / Opening remarks (by Organizers)
09:10 - 10:00 TOD: Textured Object Detection using Features2D in OpenCV
Gary Bradski, Willow Garage/Stanford University
10:00 - 10:20 Coffee break
10:20 - 10:50 NARF: 3D Range Image Features for Object Recognition
Bastian Steder, University of Freiburg
Presentation, Paper
10:50 - 11:20 Scene Representation and Object Grasping using Active Vision
Jeannette Bohg, KTH
Presentation, Paper
11:20 - 12:00 Object Weight Classification Using Proprioceptive Feedback
Kaijen Hsiao, Willow Garage
12:00 - 13:40 Lunch break
13:40 - 14:10 Modeling the World Around Us: An Efficient 3D Representation for Personal Robotics
Kai Wurm, University of Freiburg
14:10 - 14:40 Single View Categorization and Modelling
Dejan Pangercic, TUM
14:40 - 15:20 Automatic 3D Reconstruction and Modeling
Benjamin Pitzer, Bosch Research
15:20 - 15:40 Coffee break
15:40 - 16:10 Bundle Adjustment techniques for Reconstruction
Kurt Konolige, Willow Garage/Stanford University
16:10 - 16:55 Invited talk: Size matters: 2.1D representations for recognition and grasping
Trevor Darrell, UC Berkeley
16:55 - 17:25 Panel discussion (led by Gary Bradski)
17:25 - 17:30 Closing remarks (by Organizers)


Important Dates:

  • Submissions Due: July 23, 2010
  • Notification of Acceptance: August 1, 2010
  • Final Papers Due: August 15, 2010
  • Workshop at IROS: October 18, 2010


As personal robotics platforms, such as the Willow Garage PR2, become increasingly available, there will be an emphasis in the research community on creating algorithms that are successful in the real world.  Many robotics problems, such as planning or grasping, are currently addressed in simulation, where the state of the world is known. However, in the physical world, the assumption of a known world model falls apart and perception becomes a serious bottleneck. In this workshop, we aim to explore perception problems whose definition is general enough to be practically useful, but specific enough that the problem can be solved today. For example, detecting and registering objects on a planar support, assuming reasonable object separation and a small set of models known a priori, would enable object manipulation research. We solicit papers that describe useful perception problems that facilitate robot behaviors, and their proven solutions.
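The planar-support example above can be made concrete: given a 3D point cloud of a tabletop scene, first find the dominant plane, then treat points off the plane as object candidates. A minimal RANSAC plane-fitting sketch is below (assuming numpy; the function name and parameters are illustrative, not from any workshop paper):

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.01, rng=None):
    """Fit a dominant plane to an Nx3 point cloud via RANSAC.

    Returns ((normal, d), inlier_mask) where the plane satisfies
    normal . p + d = 0 for points p on the plane.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        # Sample 3 distinct points and form a candidate plane.
        i = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[i]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        normal = normal / norm
        d = -normal.dot(p0)
        # Inliers are points within `threshold` of the candidate plane.
        dist = np.abs(points @ normal + d)
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers
```

Points outside the inlier mask can then be clustered into individual objects, which is where the "reasonable object separation" constraint in the problem statement does its work.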


We solicit paper submissions, optionally accompanied by a video, both of which will be reviewed (not double-blind) by the program committee. The review criteria will be: technical quality, significance of system demonstration, and topicality. Each paper should have an explicit problem statement including constraints, and describe the robot behavior that the solution to the problem would enable.  Each solution should be made reproducible by others, either via in-depth explanation or by making source code available. Finally, the solution should be tested and shown to actually solve the problem. Videos will be shown during the workshop.

Accepted papers and videos will be assembled into proceedings and distributed in CD format at the workshop. If there is sufficient interest, we will pursue publication of a special journal issue to include the best papers.

The topics of interest include, but are not limited to:

  • 3D object recognition
  • semantic scene interpretation based on point clouds or image information
  • object modelling for manipulation and grasping
  • accurate 3D collision models
  • surface reconstruction for close-range scenes
  • vision for manipulation
  • deformable objects
  • object classification based on manipulation experience/capabilities
  • perceiving people
  • recognizing failure in any of the above areas to enable recovery actions

Papers should be in PDF format, conform to the IEEE requirements, and be a maximum of 8 pages in length (shorter papers are welcome). Videos should be in MPEG format, 3-5 minutes in length, and easily viewable with free video players (please try playing your video on a couple of different machines before submitting).

Email submissions to: Please do not attach video files to email; include a URL instead.


Program committee:

  • Rosen Diankov, Robotics Institute, CMU, USA
  • Dieter Fox, University of Washington, USA
  • Charlie Kemp, GeorgiaTech, USA
  • Lorenzo Natale, IIT, Italy
  • Andreas Nuechter, Jacobs University Bremen, Germany
  • Giorgio Metta, IIT/University of Genova, Italy
  • Morgan Quigley, Stanford University, USA

Robot logo based on the work of Marius Sucan