Mobile Manipulation Challenge 2010
May 2010 - Anchorage, AK
Download the Call for Participation: [pdf]
Download the Schedule and Participants List: [pdf]
Where and when: All robot demonstrations will take place in the Idlughet 3 Hall on the first floor of the Dena’ina Center. Individual Team Spotlights will take place as described below. In addition, teams will be showcasing (and working on) their robots throughout the conference (May 3rd - May 5th).
PR2 Team (Willow Garage Inc.): The PR2 is a two-armed robot with an omni-directional base. It has an extensive sensor suite useful for mobile manipulation, including a tilting laser scanner mounted to the head, two sets of stereo cameras, monocular forearm cameras, etc. We will show the PR2 grasping and moving both known and previously unseen objects, using its stereo cameras for object detection, its tilting laser for collision avoidance in an unstructured environment and its tactile sensors for error correction. Team Spotlight: Wednesday May 5th, 14:15 – 15:00
NimbRo Team (University of Bonn): Our domestic service robot, the home assistant Dynamaid, offers drinks and snacks to human guests. A guest chooses a snack by a pointing gesture or by simply ordering it using speech input. Dynamaid searches for the object and grasps it. With the object in hand, Dynamaid returns to the guest and delivers the ordered object. In addition, the guest can order a new drink, e.g., by showing an empty drink to the robot. Dynamaid will recognize the object and will fetch a new one for the guest. Dynamaid will also clean up the room, e.g., the table, by safely navigating around obstacles and collecting the objects that need to be cleaned up. Team Spotlight: Wednesday May 5th, 15:00 – 15:45
homer@UniKoblenz Team (University of Koblenz-Landau): Our robot collects a set of items from the floor and a table and places them in a box. We demonstrate the combined use of a gripper and a robotic arm to pick up the objects. Object and obstacle detection is performed on 3D laser scans. A path planner based on motion primitives is used for collision-free grasping. Our grid-based online SLAM algorithm allows for safe navigation and autonomous exploration of the environment. The algorithms employed are being visualized during the demonstration. Team Spotlight: Thursday May 6th, 14:15 – 15:00
Care-O-bot Team (Fraunhofer IPA): Care-O-bot® 3 is a case study for future service robot platforms. The design guideline was the creation of a state-of-the-art service robot based on commercially available industry products. It is targeted as a tool for research in the field of Human-Robot Interaction under real-life conditions and as a demonstrator for applications and new algorithms. We will demonstrate our work in the fields of navigation and undercarriage control, as well as object recognition, environment reconstruction and manipulation. Team Spotlight: Thursday May 6th, 15:00 – 15:45
The robots that will be demonstrated are shown below, from left to right: Dynamaid (University of Bonn), PR2 (Willow Garage), homer (University of Koblenz-Landau) and Care-O-bot (Fraunhofer).
The 2010 Mobile Manipulation Challenge was made possible by generous support from Willow Garage Inc., Intel Labs Pittsburgh, Intel Labs Seattle and iRobot.
The 2010 Mobile Manipulation Challenge will be held in conjunction with the IEEE Intl. Conference on Robotics and Automation, Anchorage, AK, May 2010. Participation at the Challenge will consist of a hardware demonstration of a robot solving one of the pre-defined Challenge tasks. This demonstration must be performed on-site, in the environment set up for the Challenge.
The goal of the Challenge is to provide a snapshot of the state of the art in mobile manipulation, while encouraging collaboration and dissemination of ideas. The key components of the Challenge are:
- challenge events: the high-level applications that will be demonstrated at the Challenge, described in more detail later in this document. This year's events include cleaning up a room, loading a dishwasher, and playing board games. It is important to note that all events are described at a relatively high level, and strict adherence to the description is not mandatory. Teams have the option of relaxing certain components of a task, e.g. by placing fiducial markers in the environment, restricting the manipulated objects to a pre-defined set, etc. The goal is to encourage participation by teams wishing to demonstrate a high skill level in some, but not necessarily all, of the components of a complex mobile manipulation task.
- letter of participation: participants should submit a written description of the intended entry for the Challenge before the deadline listed in this document. The letter of participation must include a description of the hardware platform, a high-level description of the algorithms to be used, a list of all assumptions / relaxations that will be employed, and pointers to the team's relevant recent work in this field. Should the number of entries exceed our hosting capacity, the Challenge Technical Committee will review the submissions and announce the teams accepted for participation.
- challenge format: we see this event as a challenge, and not as a competition. There will be no formal "winners", and, for that matter, no official ranking of the participating teams. We expect the teams to showcase their results at the Challenge, and show reliable performance in a live demo setting. Every participant that can show a level of performance at or above the description stated in the letter of participation is, for our purposes, a challenge winner.
- challenge report: all the teams that successfully participate in the challenge will be invited to contribute an article to a published report centralizing the results achieved, along with the key approaches and algorithms used at the challenge. The current intention is to dedicate a special issue of one of the leading journals in the field to this event; the technical and administrative details regarding the publication venue are still under development. Submitted articles will be peer-reviewed for inclusion in this special issue. Our aim is for the complete challenge report to serve as a snapshot of the state of the art in mobile manipulation as it relates to our challenge events, complete with key research insights, implementation details and results.
The important dates for the Challenge are:
- December 1, 2009: submission of letter of participation
- December 15, 2009: notification of participation decision from the Challenge Technical Committee
- May 3-8, 2010: Challenge Events
Travel support may be possible for selected participants and their hardware, depending on available funds and level of demand. If you wish to participate in the Challenge, but believe you will require travel support, we encourage you to submit a letter of participation. Travel support decisions will be made as soon as all the submissions are reviewed.
For any additional information, please contact Matei Ciocarlie (firstname.lastname@example.org) or Bill Smart (email@example.com).
Challenge Technical Committee:
- Monica Anderson - University of Alabama
- Matei Ciocarlie - Willow Garage
- Brian Gerkey - Willow Garage
- Kaijen Hsiao - Willow Garage
- Chad Jenkins - Brown University
- Radu Bogdan Rusu - Willow Garage
- Dave Touretzky - Carnegie Mellon University
- Bill Smart - Washington University in St. Louis
The conceptual description of the challenge that we propose this year is a constrained pick-and-place task. The focus of the challenge is the ability to acquire objects from the environment, transport them to a desired location and place them in a given configuration. The current format does not require the ability to perform "in-hand manipulation", i.e. to change the pose of the object in the hand without breaking the grasp. We believe that future challenges will indeed elicit true dexterity from participating robots. The current format, however, while restricting the range of possible applications, is intended to allow participation using relatively simple end-effector designs.
Event I: Object Retrieval
Event description: the robot must clear up a room that has been used as a toddler's playground, by retrieving all the toys and placing them in a large box. Apart from the toys and the target box, the room contains a number of obstacles, such as a table and chairs.
Formal task specification: all objects of manipulable size are considered targets, to be retrieved from the environment. All the targets must be placed inside a box, large enough to comfortably hold all of them and identified by a distinctive color. The only requirement is that the objects must end up inside the box; no finer-grained positioning is required. In addition to the targets, the environment will contain a four-person table and four chairs, which are to be considered obstacles.
Constraints: the target objects will be divided into two categories. The first one will contain only box-shaped targets, such as Lego blocks. The size of any block will be between 1cm and 5cm along its shortest dimension, and between 5cm and 20cm along its longest dimension. The second set of objects will contain a number of toys, both deformable (e.g. plush) and rigid. A subset of these will exceed 30cm along their largest dimension; however, they will have a number of "features" that are graspable by a standard-sized hand. All the objects will be limited in weight to less than 500g.
The objects will be scattered around the room, requiring navigation around the obstacles for task completion. No objects will be placed under the obstacles. A subset of the objects will be placed on the table. The height of the table will not be pre-specified. However, it will be adjusted to the dimensions of each participating robot, in order to ensure that it is reachable.
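The numeric constraints above can be summarized as a small eligibility check. The following is a hypothetical sketch only; the class and function names are our own illustration and are not part of the Challenge rules.

```python
# Hypothetical sketch of the Event I object constraints: blocks are 1-5cm
# along their shortest dimension and 5-20cm along their longest; toys may
# exceed 30cm but must offer a feature graspable by a standard-sized hand;
# everything weighs under 500g. Names here are illustrative, not official.
from dataclasses import dataclass


@dataclass
class TargetObject:
    dims_cm: tuple      # bounding-box dimensions in cm, e.g. (2, 4, 10)
    weight_g: float
    graspable_feature: bool = True


def is_valid_block(obj: TargetObject) -> bool:
    """Box-shaped target: checks the stated size and weight limits."""
    shortest, longest = min(obj.dims_cm), max(obj.dims_cm)
    return 1 <= shortest <= 5 and 5 <= longest <= 20 and obj.weight_g < 500


def is_valid_toy(obj: TargetObject) -> bool:
    """Toy target: only the weight limit and a graspable feature are required."""
    return obj.weight_g < 500 and obj.graspable_feature
```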
Event II: Loading a Dishwasher
Event description: the robot must clear a dining table and place all the dishes in a dishwasher tray. Not all the objects on the table will be dishes, requiring the ability to discriminate between what must be placed in the dishwasher and what must not. The environment will also contain a number of chairs around the table which will act as obstacles.
Formal task specification: the environment will contain the following: an open dishwasher (or a dishwasher tray), a two-person table and two chairs. On the table there will be an unspecified number of plates, glasses and silverware items. All of the items belonging to these categories are to be placed in the dishwasher. There will also be a number of items that must not be placed in the dishwasher, belonging to one of two families: cell phones and remote controls.
The chairs are to be considered obstacles. The distribution of the dishes on the table will be such that they are not all accessible from a single location, and navigation around the table (and thus around the obstacles) is required.
Constraints: the dishwasher tray's appearance and dimensions will be specified in advance, together with its make and model. It will begin the task in the "open" position, and the robot will not be required to open or close it. The tray will provide ample space for all the items that must be placed therein. The location of the dishwasher relative to the table will not be specified in advance.
All the plates, glasses and silverware will also be pre-specified, along with their makes and models. The height of the table will not be pre-specified. However, it will be adjusted to the dimensions of each participating robot, in order to ensure that it is reachable.
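The discrimination requirement above reduces to a fixed category rule: dishes go in the dishwasher, distractor items stay out. A minimal sketch of that rule follows; the category names are our own illustration, not a specification from the Challenge.

```python
# Hypothetical sketch of the Event II category logic: plates, glasses and
# silverware belong in the dishwasher; cell phones and remote controls must
# be left alone. Category labels are illustrative assumptions.
DISHWASHER_CATEGORIES = {"plate", "glass", "silverware"}
DISTRACTOR_CATEGORIES = {"cell_phone", "remote_control"}


def goes_in_dishwasher(category: str) -> bool:
    """Return True for dish categories, False for distractors."""
    if category in DISHWASHER_CATEGORIES:
        return True
    if category in DISTRACTOR_CATEGORIES:
        return False
    # The event specification only lists these five categories.
    raise ValueError(f"unexpected item category: {category}")
```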
Event III: Manipulation Open
This event is intended to allow teams to demonstrate a high level of performance in a mobile manipulation task typically performed in an indoor human environment, but not covered by the other events. The Letter of Participation must include a description of the task and of the high-level application that it can be a part of, and make the case for its relevance to the field. It is important to note that the necessary components of the environment must be replicated in our on-site Challenge setting. The Letter of Participation must indicate precisely the required environment. Any part of the setup that cannot be provided by the organizers must be supplied by the team itself.
Environment and Constraints
We encourage teams to attempt to solve the tasks outlined above using as little pre-defined information as possible, other than what is contained in the description of the events themselves. However, this is not a hard constraint: if needed, teams may rely on additional information, such as the examples in the list below:
- fiducial markers to establish the location of certain targets
- color-coded objects to provide discrimination between manipulation targets and obstacles
- manipulated objects belonging to a pre-defined set, for which additional data (e.g. 3D meshes, camera images, etc) is available. In this case, teams must supply their own object sets.
- any other additional information that teams deem necessary.
Please note that any such additional constraints must be described in the letter of participation. The key aspect to consider is that the amount and nature of external constraints and information that are used in a demonstration will be taken into account by the Technical Committee, as well as the audience of the Challenge, both on-site and through the published Challenge Report.
The ICRA 2010 Mobile Manipulation Challenge will also be hosting the first leg of the 2010 Small Scale Manipulation Challenge: Chess (the second leg of which will be hosted at AAAI 2010 in Atlanta, GA). This event is designed for smaller hardware platforms (sometimes referred to as table-top robots), as a number of excellent small-scale robots with manipulation capabilities are currently in use in the community, either as teaching or research platforms.
Please note that this event will be organized as a tournament. The complete set of tournament rules, together with the organization details and contact information are available at http://aaai-robotics.ning.com/forum/topics/icra2010-and-aaai2010.
This section provides some more detail on the thought process behind the Challenge; it is not directly relevant for participation. However, any comments or discussion, regarding both this year's event and future editions, will be appreciated.
What is the current state of the art in mobile manipulation? It is difficult to find an encompassing answer to this question, or rather to distill the multitude of answers that have emerged from the research community into a complete snapshot. The difficulty of providing a unified view of the field stems from multiple sources, some of the most commonly discussed being:
Task complexity: the intrinsic nature of mobile manipulation is that it combines a large number of sub-tasks, many of them active research areas in their own right. Consider first the "manipulation" aspect: even in a simplified approach, it still implies the ability to recognize the target object, segment it from the background and compute and execute the desired grasp, before we can begin to discuss the desired use of the object once acquired. The deceptively simple "mobile" qualifier further implies the ability to localize oneself, identify the targets from a distance, plan and navigate a path to the target etc.
A direct result is that few research groups have the ability to tackle all of these tasks and to produce a complete mobile manipulation system. The scientific approach teaches us to divide the problem into more tractable sub-problems, and many valuable results have been reported in all of the areas listed above. However, such individual solutions present additional challenges when they have to be assembled into a complete system, which has, with a few notable exceptions, prevented the dissemination of results for a complete mobile manipulation task.
Hardware requirements: the hardware complexity of a mobile manipulation platform presents a double challenge. First, it makes for a high barrier of entry in the field, as the assembly of a complete platform exceeds the resources of many research groups. Second, it increases the difficulty of sharing results, as replicating another group's hardware can be difficult or even impossible, while software components are often tied to the hardware they were designed for.
One potential solution is to standardize the hardware platform used by multiple research groups, allowing for direct comparisons and inter-operability of the software components. This direction will undoubtedly prove fruitful in the long run; however, designing such a platform and distributing it to multiple groups is a non-trivial and potentially expensive task. In the meantime, comparing different hardware platforms has its own advantages. In particular, it also recognizes groups that choose to invest in researching more "intelligent" hardware, allowing the hardware design to shoulder part of the load that traditionally was left to high-level algorithms.
In a nutshell, the Mobile Manipulation Challenge aims to address some of these problems by
- providing a standardized environment and task description for mobile manipulation platforms
- bringing together multiple teams with expertise in this field
- publishing a collection of detailed reports from the participating teams, with key scientific insights used and results achieved.
In the absence of standard hardware or software solutions, we believe that a common operating environment and task description will provide a reference point for different platforms, enabling the cross-pollination of ideas. The lack of mandatory guidelines, other than a high-level task description, will allow complete freedom of exploring both hardware and software solutions, non-standard components, etc.
The high level goal of the Challenge is to provide a snapshot of the state of the art for a number of autonomous mobile manipulation applications. We believe that such a benchmark can serve as a valuable reference point in the development of more complex robotic applications that manipulation is a part of. Examples include, but are not limited to, areas with high social impact, such as humanoid robots, service robots for house care, etc. The results of the 2010 event can also be applied towards future Challenges, by providing a reference implementation and/or a plug-and-play execution environment for solving common manipulation tasks.