Willow Garage Blog
Crowdsourcing provides a convenient and increasingly popular method for gathering large amounts of data and annotations. Amazon's Mechanical Turk and CrowdFlower, games such as the ESP Game, and requests for free annotation help such as LabelMe are just a few examples of crowdsourcing efforts. These attempts have taught us many lessons and brought up yet more questions. How can we most effectively elicit the information we need from a distant and potentially anonymous workforce? What kind of workforce is required for different tasks such as user studies and data set labeling? How can we train and evaluate workers?
The 2012 AAAI Spring Symposium on Wisdom of the Crowd will bring together researchers from robotics, user interfaces, games, computer vision, and other disciplines exploring the core scientific research challenges of crowdsourcing. This symposium will seek to facilitate interaction among researchers and work toward formulating a set of guidelines for future crowdsourcing endeavors.
The symposium will be held at Stanford University, March 26-28, 2012.
For more information, including the symposium format and a list of topics, please see the symposium website.
Important Dates & Submission Information
- October 7, 2011 - Submissions due
- November 4, 2011 - Acceptance notification
- January 20, 2012 - Camera-ready submission
- March 26-28, 2012 - Symposium
We invite contributions in the form of full papers (6 pages) and extended abstracts (2 pages).
Additional information is available on the main AAAI Spring Symposium website.
TurtleBot is now part of the robotics section of Make Projects:
In the Robot Roundup section of MAKE Volume 27, we featured the TurtleBot hobby platform as a great reasonably priced open source robotics kit. (Check out our review on page 77.) Now, the fine folks at TurtleBot are sharing project builds on our DIY wiki, Make: Projects. To date, there are eight TurtleBot projects, and there's now a TurtleBot topic page nested under the Robotics category on the site.
Read more on the Make Blog.
Armin made major improvements to the OctoMap 3D mapping library. Scan insertions are now twice as fast as before, enabling real-time map updates, and tree traversals can now be performed flexibly and efficiently using iterators. The new ROS interface provides conversions from the most common ROS datatypes, and the octomap_server package was updated for incremental 3D mapping.
Armin also worked on a dynamically updatable collision map for tabletop manipulation. The collider package uses OctoMap to provide map updates from laser and dense stereo sensors at a rate of about 10 Hz.
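As a rough illustration of the probabilistic updates behind this kind of mapping, here is a minimal sketch of a per-cell log-odds occupancy update, the core idea OctoMap builds on. The constants and function names below are illustrative assumptions, not OctoMap's actual API.

```python
import math

# Illustrative log-odds occupancy update, the idea underlying probabilistic
# mapping libraries such as OctoMap. Constants and names are assumptions
# for this sketch, not OctoMap's real API.

HIT = math.log(0.7 / 0.3)          # log-odds increment for an occupied reading
MISS = math.log(0.4 / 0.6)         # log-odds decrement for a free-space reading
CLAMP_MIN, CLAMP_MAX = -2.0, 3.5   # clamping keeps cells updatable later

def update(logodds, occupied):
    """Fuse one sensor measurement into a cell's log-odds value."""
    logodds += HIT if occupied else MISS
    return max(CLAMP_MIN, min(CLAMP_MAX, logodds))

def probability(logodds):
    """Convert a log-odds value back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

# A cell starts unknown (p = 0.5) and converges as scans arrive.
l = 0.0
for _ in range(5):
    l = update(l, occupied=True)
print(round(probability(l), 3))
```

Because updates are additive in log-odds space, fusing a measurement is a single clamped addition per cell, which is what makes high-rate incremental updates cheap.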
Finally, Armin extended the ideas behind collider to enable navigation in complex three-dimensional environments. The 3d_navigation stack enables navigation with untucked arms for mobile manipulation tasks such as docking the robot at a table, carrying trays, or pick-and-place. The robot's full kinematic configuration is efficiently checked for collisions against a dynamically built OctoMap, and the new planners based on SBPL exploit the holonomic movements of the base.
For more details, please see Armin's presentation below (download PDF) or check out octomap_mapping, collider, or the 3d_navigation stack at ROS.org. There is also a presentation and demo of the system scheduled at the PR2 workshop coming up at IROS 2011. Armin's improvements to OctoMap are part of OctoMap 1.2 as well as the latest octomap_mapping stack.
David Gossow, a recent graduate of the University of Koblenz-Landau and new doctoral student at the Technical University of Munich, visited Willow Garage this spring. David created a new general framework allowing ROS developers to create graphical 3D interfaces to their robot applications, and applied it to building new tools for Human-in-the-Loop robotic manipulation.
David's new framework, called Interactive Markers, allows a ROS application to receive input from a human operator through compatible client software. It separates the application from the tool used for visualization and user interaction, much as a web application runs independently of the web browser. Interactive Markers offer a wide variety of display and interaction modes, enabling a broad range of new applications within ROS. David also implemented a reference front end for Interactive Markers in rviz, effectively transforming it from a robot visualization tool into an interaction engine. This new front end is a major new feature in the recent ROS Electric release.
David used the Interactive Markers framework to develop new tools for Human-in-the-Loop robotic manipulation. Assistance from a human operator allows a robot to perform complex manipulation tasks even in difficult, unstructured environments. The goal in this framework is to minimize the cognitive effort required on the human side. This can be achieved by taking advantage of the sub-tasks that the robot can perform without assistance. For example, if the operator assists in object recognition, subsequent operations such as grasping and placing can be performed autonomously. When needed, the operator can also get involved at lower levels of task execution, such as specifying grasp poses or even directly operating the robot's gripper. A complete set of tools, based on Interactive Markers, allows an operator to perform all these tasks through the rviz interface.
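The decoupling described above boils down to a callback-routing pattern: the application owns the markers and their behavior, while a separate client (rviz in practice) only renders them and reports user input. The toy classes below sketch that pattern; the names are illustrative and do not reproduce the actual ROS interactive_markers API.

```python
# Toy sketch of the Interactive Markers pattern: the application registers
# markers with a server and says how to react to input; the visualization
# client only renders markers and sends feedback events back. Names here
# are illustrative assumptions, not the real interactive_markers API.

class ToyMarkerServer:
    """Decouples the application from the visualization client."""
    def __init__(self):
        self._callbacks = {}

    def insert(self, name, on_feedback):
        # Application side: publish a marker and its feedback handler.
        self._callbacks[name] = on_feedback

    def feedback(self, name, event):
        # Client side: report user interaction; the server routes it
        # back to the owning application.
        self._callbacks[name](event)

# Application: react to the operator dragging a grasp pose marker.
poses = {}
def on_grasp_moved(event):
    poses["grasp"] = event["pose"]

server = ToyMarkerServer()
server.insert("grasp", on_grasp_moved)

# Client: the user drags the marker in the 3D view.
server.feedback("grasp", {"pose": (0.5, 0.0, 0.8)})
print(poses["grasp"])
```

The application never needs to know which client rendered the marker, which is what lets rviz act as a generic interaction engine for any ROS node.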
IROS 2011 Tutorial: Motion Planning for Real Robots
Format: Full day tutorial
Date: Sunday, September 25, 2011
URL: Slides and documentation from the tutorial are now posted on the tutorial website: IROS 2011 Tutorial on Motion Planning for Real Robots.
If you are interested in attending this tutorial, please provide some more information about yourself using this form. Filling out the form will help us plan the tutorial better. Please note that you will still have to register separately for this tutorial on the conference website when you register for the conference itself.
This full-day tutorial will teach both novice and experienced participants how to set up, configure, and use motion planning on a real robot. Novice users can expect to learn how to set up, configure, and execute the perceptual, kinematic, planning, and execution components required for motion planning on an advanced multiple-degree-of-freedom robot. Expert users will be able to explore the motion planners in more detail, focusing on how they can be reconfigured for particular tasks. The tutorial will be based on a set of tools within the OMPL (Open Motion Planning Library) and ROS (Robot Operating System) software. Participants will have access to simulated environments and real robots (the Willow Garage PR2 robots) for hands-on experience in using motion planning with real robots. The tutorial will conclude with an examination of case studies based on suggestions from the participants and organizers, highlighting how the motion planners can be configured for particular robots or motion planning scenarios.
Sachin Chitta and E. Gil Jones
Willow Garage, Inc.
Ioan Sucan, Mark Moll, and Lydia E. Kavraki
Rice University
08:30–09:00 Overview and Introduction
09:00–09:30 Background on concepts in sampling-based motion planning
09:30–10:15 OMPL and OMPL.app
10:15–10:45 Coffee break
10:45–11:30 Introduction to ROS and connection to OMPL
11:30–12:00 Overview of the simulation environment
12:00–12:10 Live demo
13:30–15:00 Hands-on programming, part I
15:00–15:30 Coffee break
15:30–17:00 Hands-on programming, part II
Motivation and objectives
Motion planning is easy to understand, yet state-of-the-art algorithms that solve motion planning problems in a general fashion can be hard to implement. Furthermore, integrating motion planning algorithms into a larger software system targeted at a specific robot is also challenging. The OMPL library implements many sampling-based algorithms, is easy to integrate with larger software systems, and can be tailored to specific robots. ROS provides a very rich software infrastructure with perception, kinematics, and execution components that can be integrated with planning to create a complete motion planning and execution pipeline. The tutorial aims to provide a high-level description of the motion planning algorithms in OMPL coupled with implementation-level details on configuring motion planners for real robots. The tutorial will provide plenty of opportunity for participants to get hands-on experience solving motion planning problems in real-world environments, both in simulation and on the robot using real sensor data.
After the tutorial participants should be able to:
- write code to define a configuration space and the control space (if applicable) for a robotic system of interest
- define motion planning queries and solve them with a planning algorithm
- visualize the results
- use the sensor data and environment models that are accessible through various ROS interfaces for motion planning
- solve and execute motion planning queries for the PR2 hardware platform.
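As a taste of the sampling-based planning ideas covered in the tutorial, here is a minimal 2D RRT sketch for a point robot. It is an illustration under simplifying assumptions (endpoint-only collision checks, no path smoothing) and does not use OMPL's actual API; OMPL's planners are far more capable.

```python
import math
import random

# Minimal RRT for a 2D point robot: repeatedly sample the space, extend
# the nearest tree node a fixed step toward the sample, and stop once a
# node lands near the goal. Illustrative only; not OMPL's API.

def rrt(start, goal, is_free, bounds, step=1.0, goal_tol=1.0,
        max_iters=2000, seed=0):
    rng = random.Random(seed)
    nodes = [start]
    parents = {0: None}
    for _ in range(max_iters):
        sample = (rng.uniform(*bounds[0]), rng.uniform(*bounds[1]))
        # Find the tree node nearest the sample and step toward it.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue  # reject states in collision (endpoint check only)
        parents[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            # Walk parent pointers back to the start to recover the path.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parents[k]
            return path[::-1]
    return None  # no path found within the iteration budget

# A point robot must skirt a circular obstacle centered at (5, 5).
free = lambda p: math.dist(p, (5.0, 5.0)) > 1.5
path = rrt((1.0, 1.0), (9.0, 9.0), free, bounds=((0, 10), (0, 10)))
print(path is not None)
```

Swapping in a real configuration space, state validity checker, and planner is essentially what the OMPL/ROS pipeline in the tutorial handles for you.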
The skills obtained in this tutorial are easily transferable to the rapidly growing list of other robots on which ROS can run (see http://www.ros.org/wiki/Robots).
We are primarily targeting participants who would like to learn to implement motion planners on real robots using real-time sensing. Some familiarity with ROS is desired but not essential. This will be a hands-on tutorial, so basic programming experience in C++/Python is desired. A secondary audience is researchers in motion planning who would like to build on the tools and components available in OMPL and ROS to create more advanced motion planners. This tutorial will also be of interest to educators wanting to use a stable, well-featured software tool for teaching motion planning.
The 2011 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO) will explore applied robotics research in the context of social impact and related commercial and legal issues. Participants will be world-class robotics researchers, investors, and industry and government representatives. This year's workshop will be held immediately after the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2011 (held in San Francisco) at the Ritz-Carlton Resort, Half Moon Bay in California. The workshop will focus on discussions and developing collaborations, serving as a venue to share our vision for the future of robotics and its impact on society.
For more information regarding the schedule and registration please visit the conference website.
ROS Electric Emys is now officially available! This amped-up, current release of ROS features new, stable libraries for arm navigation and point cloud processing. There is also improved support for Android, Arduino, Windows, ARM, and Python 3. Please see the release page for some of the cool new tools and libraries in Electric, such as "interactive markers" for creating custom GUIs in RViz.
Many thanks to the ROS community for making this release possible. In order to better recognize your contributions, we have started an authors and contributors list. The numerous libraries, features, and bug fixes that you have provided enable ROS to run faster, better and on more platforms. We also appreciate the many contributors on answers.ros.org who make using ROS a better experience for everyone.
Please see the ROS Electric Emys page for more information on what's new in Electric, how to migrate from Diamondback, and how to download the release.
It is with great pride that we inform you that Willow Garage's Brian Gerkey has been named to MIT Technology Review's TR35 Listing of the World's Top Young Innovators for 2011. The TR35 recognizes the world's top innovators under the age of 35, spanning energy, medicine, computing, communications, nanotechnology, and other emerging fields. Brian has been honored for his track record of building open source communities around robotics software and was selected as a member of the TR35 class of 2011 by a panel of expert judges and the editorial staff of Technology Review, who evaluated more than 300 nominations.
Brian will join other TR35 honorees in discussing their achievements at the EmTech MIT 2011 conference, taking place at the MIT Media Lab in Cambridge, October 18-19, 2011. All of the 2011 TR35 winners will be featured in the September/October issue of Technology Review and [online here](www.technologyreview.com/tr35/).
Brian is the Director of Open Source Development at Willow Garage and oversees development of popular open source projects like OpenCV, ROS, PCL, and Gazebo, as well as software for the PR2 and TurtleBot platforms. Brian is a founding and co-lead developer of the open source Player Project, which produces one of the most widely used software platforms for robotics research and education. He is a strong believer in, frequent contributor to, and constant beneficiary of open source software. In particular, Brian has played a pivotal role in the design of ROS, a community project initiated by Willow Garage and Stanford University. Before joining Willow Garage, Brian was a Computer Scientist in the SRI Artificial Intelligence Center and a postdoctoral scholar in the Stanford Artificial Intelligence Lab. Brian's research has covered a variety of areas, including mobile manipulation, multi-robot coordination, outdoor robot navigation, and computational geometry.
In the words of Technology Review's editor-in-chief Jason Pontin: "Technology innovation is key to driving growth and progress in the areas of research, medicine, business and economics. This year's group of TR35 recipients is driving the next wave of transformative technology and making an impact on the way we live, work and interact."
In our own words, we couldn't be more proud of Brian's achievement. His hard work, dedication, insight and enthusiasm are an inspiration to all of us at Willow Garage and are helping to bring Willow Garage's vision of personal robotics closer to fruition. Take a bow, Brian…
We recently featured Bosch and how they are working on making robots cheaper, more capable, and safer as part of the PR2 Beta Program. They now have a new trick up their sleeve: in one week, they created a website where employees can order drinks and have them delivered automatically. They have a PR2 (Alan) and a TurtleBot (BusBot) working together: the PR2 gets the drinks out of the fridge, and the TurtleBot brings them to your room.
This multi-robot twist on drink delivery is a great approach to making robots both cheaper and more capable. More expensive, more capable robots like the PR2 can be used for difficult tasks that require manipulation, while less expensive robots like the TurtleBot can take care of transporting items around.
For more information, please see the IEEE Spectrum Automaton article.
Today we are announcing the availability of a new PR2 model, the PR2 SE. As you can see from the accompanying image, there's something quite distinct about this model. The PR2 SE is a single-armed robot with an updated sensor suite.
Particularly in light of such new programs as the NSF National Robotics Initiative, we've been encouraged to offer our personal robot platform at a more affordable price. The PR2 SE is priced at $285,000. As with the dual-armed PR2 model, an additional 30% discount is offered to individuals with a proven track record of contributions to the open source community.
While the PR2 SE is more limited in capability than its dual-armed sibling, we expect it will increase the market by allowing more scientists and engineers to explore the innovative capabilities of personal robots at a much faster pace.
NOTE: the updated sensor suite for the PR2 SE is still being finalized.