IROS 2009 Workshop: Semantic Perception for Mobile Manipulation

When: 15 October 2009

Where: International Conference on Intelligent Robots and Systems (IROS) 2009, St. Louis, MO, USA


  • 09:00 -- 09:15 / Welcome/Opening remarks (organizers)
  • 09:15 -- 10:00 / Invited talk - Gary Bradski (Willow Garage / Stanford University)
  • 10:00 -- 10:30 / Coffee break
  • 10:30 -- 12:00 / Paper session I (3 papers, 20+10 minutes each)
    • Changhyun Choi, Jacob Huckaby, John G. Rogers III, Alexander J. B. Trevor, James P. Case, and Henrik I. Christensen - Towards Semantic Perception for Mobile Manipulation
    • Michael Beetz, Nico Blodow, Ulrich Klank, Zoltan Csaba Marton, Dejan Pangercic, and Radu Bogdan Rusu - Perception for Mobile Pick-and-Place in Human Living Environments
    • Alvaro Collet, Manel Martinez, and Siddhartha S. Srinivasa - MOPED: Fast and robust object recognition and full pose estimation for mobile manipulation
  • 12:00 -- 14:00 / Lunch break
  • 14:00 -- 15:30 / Paper session II (3 papers, 20+10 minutes each)
    • Hai Nguyen, Travis Deyle, Matt Reynolds, and Charles C. Kemp - PPS-Tags: Physical, Perceptual, and Semantic Tags for Autonomous Mobile Manipulation
    • Kei Okada, Kimitoshi Yamazaki, Ryohei Ueda, Shunichi Nozawa, and Masayuki Inaba - Perception based error recovery system for daily assistive robots
    • Taylor Bergquist, Connor Schenck, Ugonna Ohiri, Jivko Sinapov, Shane Griffith, and Alexander Stoytchev - Interactive Object Recognition Using Proprioceptive Feedback
  • 15:30 -- 16:00 / Coffee break
  • 16:00 -- 17:00 / Paper session III (2 papers, 20+10 minutes each)
    • Morgan Quigley, Quoc Le, Ellen Klingbeil, Andrew Y. Ng - The STAIR Project: Efforts Towards Integrative AI in Robotics
    • Ulrich Hillenbrand - Towards Shape Understanding through Non-Parametric Shape Warping
  • 17:00 -- 17:30 / Panel discussion
  • 17:30 -- 17:40 / Closing remarks (organizers)


Submission deadline: 07 August 2009 (extended from 15 July 2009)

As autonomous mobile manipulation moves away from individually staged manipulation experiments and begins to tackle real-world everyday tasks, such as setting the table or cleaning up, the perception capabilities of robots must become much more powerful. For example, a robot asked to bring a glass must not bring a dirty one, or one intended for somebody else's use. More generally, perception for mobile manipulation must become a resource that informs the robot about what to do, to which object, and in which way. This is the central issue of semantic perception for mobile manipulation.

In this full-day workshop we will analyze the requirements of such a perception system and discuss alternatives for achieving this goal, bringing together researchers from the computer vision, 3D mapping, and mobile manipulation and grasping communities. As an immediate course of action, we plan to make available during the workshop a complete database of 3D object models and scenes for mobile manipulation scenarios.

The workshop comprises three paper sessions, with papers grouped into sessions by common research theme, followed by a panel discussion intended to involve significant audience participation.

This workshop is also part of a broader movement of multiple communities toward mobile manipulation as a key interdisciplinary research topic. We encourage participation in other mobile manipulation meetings, such as the IJCAI 2009 Mobile Manipulation Challenge and the RSS 2009 Workshop on Mobile Manipulation in Human Environments.


We solicit paper submissions, optionally accompanied by a video, both of which will be reviewed (not double-blind) by the program committee. The review criteria will be: technical quality, significance of system demonstration, and topicality. We aim to accept 9–12 papers for oral presentation at the meeting. Videos will be shown during an afternoon session open to the public.

Accepted papers and videos will be assembled into proceedings and distributed on CD at the workshop. If there is sufficient interest, we will pursue a special journal issue collecting the best papers.

Topics of interest include, but are not limited to:

  • 3D object recognition
  • semantic scene interpretation based on point clouds
  • object modelling for manipulation and grasping
  • accurate 3D collision models
  • surface reconstruction for close-range scenes
  • vision for manipulation
  • deformable objects
  • object classification based on manipulation experience/capabilities

Papers should be in PDF, conform to the IEEE formatting requirements, and be a maximum of 8 pages in length (shorter papers are welcome). Videos should be in MPEG format, 3-5 minutes in length, and easily viewable with free video players (please try playing your video on a couple of different machines before submitting).

Submissions should be emailed to the organizers. Please do not attach video files to the email; include a URL instead.

Important Dates:

  • 07 August 2009 (extended from 15 July 2009): Submissions due
  • 21 August 2009 (extended from 01 August 2009): Notification of acceptance
  • 31 August 2009 (extended from 15 August 2009): Final papers due


Program Committee:

  • Peter Allen, Columbia University, USA
  • Darius Burschka, Technische Universitaet Muenchen/German Aerospace Center (TUM/DLR), Germany
  • Gordon Cheng, Technische Universitaet Muenchen, Germany
  • Henrik Christensen, Georgia Tech, USA
  • Matei Ciocarlie, Columbia University, USA
  • Trevor Darrell, University of California, Berkeley, USA
  • Brian Gerkey, Willow Garage, USA
  • Chad Jenkins, Brown University, USA
  • Charlie Kemp, Georgia Tech, USA
  • Danica Kragic, Royal Institute of Technology (KTH), Sweden
  • Norbert Krueger, Maersk Mc-Kinney Moller Institute, Denmark
  • David Lowe, University of British Columbia, Canada
  • Kei Okada, University of Tokyo, Japan
  • Morgan Quigley, Stanford University, USA
  • Silvio Savarese, University of Michigan, USA
  • Sidd Srinivasa, Intel Research Pittsburgh/Carnegie Mellon University, USA


Robot logo based on the work of Marius Sucan. The workshop is partly supported by Willow Garage and the CoTeSys excellence cluster.