Willow Garage Blog

July 19, 2010


One of the problems we have when creating presentations or other materials for robotics is the availability of artwork to illustrate the topics we're discussing. That's why we're making all of the great artwork that Josh Ellingson did for our "PR2 Beta Sites Spotlight" series available under a Creative Commons Attribution-NonCommercial (BY-NC) license. We've uploaded these illustrations to Flickr, where you can grab higher-resolution versions.

If you like this art, check out Josh's site for more, or stay tuned here.

Gallery

July 14, 2010

The first CoTeSys-ROS Fall School on Cognition-Enabled Mobile Manipulation will be held from November 1-6, 2010 at Technische Universität München in Munich, Germany. The school will be organized by the Cluster of Excellence Cognition for Technical Systems, the Intelligent Autonomous Systems Group at TU München, and Willow Garage.

The CoTeSys-ROS Fall School on Cognition-Enabled Mobile Manipulation will introduce participants to the exciting research area of autonomous mobile manipulation. Introductory lectures will be given by world-renowned experts in the fields of 2D/3D perception, learning, reasoning and planning. The lectures will be complemented with hands-on practical exercises using various modules of the open-source ROS framework. Many of the principal developers will be present and available for support and interaction. The application domain for the fall school will be "everyday manipulation in human living environments".

The school has a strong hands-on emphasis. The goal to be pursued throughout the whole week will be to build a practical application on high-end robots (e.g., TUM-Rosie and the PR2) that gives robots the power to perceive and interpret typical household scenes, reason about them to infer appropriate courses of action, and finally execute those actions in the form of mobile manipulation.

For more information and registration, click here.

July 14, 2010


Mobile Manipulation in Human-Centered Environments

MIT's primary goal is to develop some of the fundamental technologies that will enable robots to inhabit complex, dynamic human environments for long periods of time.

Imagine the PR2 living in the Stata Center at MIT.  This is a 720,000 sq ft building full of curving corridors, dead ends, glass walls, cluttered labs and almost a thousand people.  Suppose the PR2's task is to find and deliver equipment from one lab to another, and to be able to do this over months and years.  What does it take to achieve this?

Navigation: The robot needs to build and maintain enormously complex maps that it can use to find its way through a dynamic environment -- the furniture moves, the interior walls are rearranged, new passages are made and others blocked.  The robot needs to know where to find particular things and people, even though their locations change with time.

Perception: The robot must be able to recognize an enormous variety of objects: computers, keyboards, desks, staplers, wrenches, glasses, plants, leftover Chinese food containers...and other robots.

Manipulation: The robot needs to be able to grasp and manipulate arbitrary objects that will often be in cluttered environments -- in a cupboard, in a drawer, or on a cluttered table.  The robot will need to move items out of the way to see its target object.  Objects will have to be slid, tilted and up-ended as well as stacked and put away.

Communication: The robot needs to communicate with people in human language to find out what needs doing.  People will need to tell the robot where to find people and things.

Planning: The robot needs to plan how to achieve the goals it has been given using its capabilities of navigation, perception and manipulation.  Which route should it take to the motion-capture room?  Which object should be moved first to find the multimeter in the cabinet?

The MIT team is working on pushing the state of the art in all of these areas and hopes to have the PR2 navigating and manipulating in the Stata Center in the near future.

The Team

The MIT team is based in the Computer Science and Artificial Intelligence Laboratory (CSAIL), the largest interdisciplinary research lab at MIT.  The team has members from the Department of Electrical Engineering and Computer Science, the Department of Mechanical Engineering, and the Department of Aeronautics and Astronautics.

Presentation

Below is a video of MIT's team presenting their proposal to the rest of the PR2 Beta Program participants. You can download the slides as PDF.

July 13, 2010

Stefan Holzer, a PhD student at TUM, spent the past two and a half months enabling the PR2 to detect objects based on visual information. The goal was not only to detect the objects within a scene, but also to get useful information about the pose (position and orientation) of the detected object in order to simplify tasks like grasping.

Stefan's approach had the PR2 remember different views of an object during a learning phase, and then search for appearances of these views during the detection phase.  These views can be learned either offline, using a pan-tilt unit that rotates the object so it is visible from all necessary angles, or online, whenever the object is visible to the robot.  The advantage of offline learning is that the environment can be easily controlled, making it easier to select only the useful information in the scene.  By segmenting and storing the 3D point cloud of the object for each of the training views, each detection can be associated with a specific pose.

Stefan used Dominant Orientation Templates (DOT), which allow for an efficient and fast template search.  The high speed of the template search is achieved by discretizing and down-sampling the image data in an intelligent way and by making use of modern SIMD processor instructions (SSE).
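
To make the template idea concrete, here is a minimal Python sketch of view-based matching with quantized dominant gradient orientations. This is only a toy illustration of the underlying idea, not Stefan's DOT implementation (which packs templates into binary form and uses SSE intrinsics for speed); the function names and the dict-based view format below are invented.

```python
import numpy as np

def dominant_orientations(gray, cell=8, n_bins=8):
    """Quantize each cell's strongest gradient orientation into one of
    n_bins discrete values -- the discretization idea behind DOT."""
    gy, gx = np.gradient(gray.astype(np.float32))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # orientation only, sign ignored
    h, w = gray.shape
    H, W = h // cell, w // cell
    out = np.zeros((H, W), dtype=np.uint8)
    for i in range(H):
        for j in range(W):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            k = np.unravel_index(np.argmax(m), m.shape)
            out[i, j] = int(a[k] / np.pi * n_bins) % n_bins
    return out

def detect(image_gray, views, threshold=0.8, cell=8):
    """Slide each learned view's template over the image; a hit reports
    the stored pose of the matching training view."""
    feat = dominant_orientations(image_gray, cell)
    hits = []
    for view in views:               # view = {"template": 2-D array, "pose": ...}
        t = view["template"]
        th, tw = t.shape
        for i in range(feat.shape[0] - th + 1):
            for j in range(feat.shape[1] - tw + 1):
                score = np.mean(t == feat[i:i+th, j:j+tw])
                if score >= threshold:
                    hits.append((score, (i * cell, j * cell), view["pose"]))
    return sorted(hits, key=lambda h: -h[0])
```

Because each stored view carries the segmented 3D point cloud from training, a template hit immediately yields a pose estimate for grasping.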

To learn more about Stefan's work, click here.  You can also check out Stefan's presentation slides below or download the slides as PDF.

July 12, 2010

Persistent and Persuasive Personal Robots (P^3R): Towards Networked, Mobile, Assistive Robotics

The Persistent and Persuasive Personal Robots (P^3R) project is a collaboration between the University of Southern California (USC), NASA’s Jet Propulsion Laboratory (JPL), and the California Science Center (CSC). USC’s three major robotics labs teamed up for this effort: the Interaction Lab, the Robotic Embedded Systems Lab, and the Computational Learning and Motor Control Lab. Our vision is to enable persistent and persuasive personal robots -- power-up-and-go robotic systems that operate autonomously for long periods while safely interacting with people. The team’s significant robotics expertise with open-source software development has been validated on various platforms, some mobile, some humanoid, and others a combination of the two. The PR2 will enable an integration of those major lines of work, as well as important new research and contributions to open-source software.

To make robots persistent, the Robotic Embedded Systems Lab is developing software for adaptive sensor self-calibration.  The PR2 robot, for example, carries visual and inertial sensors on-board.  These sensors function as the robot's eyes and inner ears -- together, they allow the PR2 to keep track of how fast it's moving, where obstacles are located, and so on.  To properly combine measurements, one needs to calibrate the relative positions of the robot's cameras and the IMU.  Without accurate calibration, the sensors can 'disagree' about the motion of the robot and cause navigation problems.  Unfortunately, calibration is typically a time-consuming and labor-intensive job that needs to be repeated frequently.  The RESL group is building a ROS software package that will allow robots like the PR2 to calibrate themselves, in the background, while performing other tasks.
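
To give a flavor of what camera-IMU calibration involves, here is a minimal sketch of one sub-problem: recovering the fixed rotation between the camera and IMU frames from time-aligned angular-velocity estimates of the two sensors, via the classic closed-form (Kabsch/Wahba) solution. The RESL package itself works online and must handle much more (e.g., translation and time offsets); this toy and its synthetic data are ours, not theirs.

```python
import numpy as np

def estimate_camera_imu_rotation(omega_cam, omega_imu):
    """Closed-form Kabsch solution: find the rotation R that minimizes
    sum_i ||omega_cam[i] - R @ omega_imu[i]||^2 over (N, 3) samples."""
    B = omega_cam.T @ omega_imu                  # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(B)
    d = np.sign(np.linalg.det(U @ Vt))           # enforce det(R) = +1
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Toy check on synthetic data: recover a known 0.3 rad yaw offset.
rng = np.random.default_rng(0)
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
omega_imu = rng.normal(size=(500, 3))            # simulated gyro readings
omega_cam = omega_imu @ R_true.T + 0.01 * rng.normal(size=(500, 3))
print(np.round(estimate_camera_imu_rotation(omega_cam, omega_imu) - R_true, 3))
```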

To make robots personal, the Computational Learning and Motor Control Lab is interested in teaching the PR2 motor skills by imitation learning, and building skill libraries. Reinforcement learning will then be used to refine and build more complex skills. The team's goal is to enable the PR2 to learn new tasks from imitation, and improve the robot's task execution using trial-and-error.  Additionally, the team is interested in learning behaviors that allow for safer and more robust task execution. Along these lines, the team will investigate grasping and manipulation with the PR2’s arms and grippers.  They will also study the interplay of visual perception (attention, object recognition, object localization) and mobile manipulation.  In particular, the group will investigate how perception guides movement and how movement can be used to improve perception.  Furthermore, the team will work on learning task level controllers that also enable the PR2 to accomplish mobile manipulation tasks such as opening a door.  The overarching goal is to develop methods that enable a complex autonomous robotic system such as the PR2 to learn a variety of skills and perform them reliably in complex environments.
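
As a cartoon of the imitation-then-refinement loop described above, here is a toy sketch: start from demonstrated policy parameters and improve them by trial and error. The actual research uses far more sophisticated policy-improvement methods; the quadratic "task cost" and all names below are invented for illustration.

```python
import numpy as np

def refine_by_trial_and_error(theta_demo, cost, n_iters=200, sigma=0.05, seed=0):
    """Perturb the imitated policy parameters and keep changes that
    lower the task cost -- a crude stand-in for real policy-improvement
    methods used in reinforcement learning."""
    rng = np.random.default_rng(seed)
    theta, best = theta_demo.copy(), cost(theta_demo)
    for _ in range(n_iters):
        candidate = theta + sigma * rng.normal(size=theta.shape)
        c = cost(candidate)
        if c < best:                             # keep only improvements
            theta, best = candidate, c
    return theta, best

# Toy task: the demonstration roughly matches a desired trajectory;
# trial-and-error refinement closes the remaining gap.
target = np.linspace(0.0, 1.0, 20)               # invented "ideal" joint trajectory
theta_demo = target + 0.1 * np.random.default_rng(1).normal(size=20)
cost = lambda th: float(np.sum((th - target) ** 2))
theta_refined, final_cost = refine_by_trial_and_error(theta_demo, cost)
print(f"cost: demo={cost(theta_demo):.3f} -> refined={final_cost:.3f}")
```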

Finally, to make robots persuasive, the Interaction Lab is interested in designing adaptive systems that can facilitate free-form interaction and perform robustly in social situations with children as well as adults, able-bodied as well as those with disabilities.  To interact effectively with humans, robots should possess verbal and nonverbal communication skills, such as speech and body language.  To this end, the team will develop mechanisms for synchronized verbal and nonverbal behaviors, and provide parameterized control methods for robot gestures.  Furthermore, using the extensive sensor suite of the PR2, as well as in-house vision-based and wearable motion-tracking systems, they will identify human behaviors, and use them to improve the robot’s interactivity in human-robot interaction studies.

Team collaborators, JPL and CSC, will provide additional activities. Specifically, JPL will focus on robot vision and navigation and the CSC on outreach opportunities that will showcase the PR2 in public events.

The Team

The USC PR2 will be shared by the Interaction, Robotic Embedded Systems, and Computational Learning and Motor Control labs.  The principal investigators are:

The team includes an excellent group of graduate students and postdocs, including the following: David Feil-Seifer, Dr. Andrew Howard, Mrinal Kalakrishnan, Jonathan Kelly, Dr. Larry Matthies, Ross Mead, Peter Pastor, Dr. Ludovic Righetti, Aaron St. Clair, and Evangelos Theodorou.

Presentation

Below is a video of USC's team presenting their proposal to the rest of the PR2 Beta Program participants. You can download the slides as PDF.


July 8, 2010

Autonomous Motion Planning for Daily Tasks in Human Environments using Collaborating Robots

For the past 10 years, the JSK lab at the University of Tokyo has used the HRP2 humanoid robot platform to tackle household tasks such as cleaning, retrieving objects, manipulating appliances like ovens and dishwashers, cutting vegetables, and sweeping floors. The lab is also at the center of the Information and Robot Technology (IRT) initiative whose purpose is to solve the challenges facing the rapidly aging Japanese population. One of IRT's goals is to build three personal assistant robot systems: a kitchen assistant robot, a home assistant robot, and a care robot that constantly analyzes people in the environment and gives helpful suggestions or contacts someone for assistance.

The JSK team has worked with Willow Garage since Spring 2009, and has already run several demos on the Willow Garage PR2 robots. Based on this work, the JSK team proposed "Autonomous Motion Planning for Daily Tasks in Human Environments using Collaborating Robots" for the PR2 Beta Program. One unique goal of this project is to complete multi-robot coordination tasks with both HRP2 and the PR2. Interacting in a human-centric environment requires awareness of constantly changing surroundings that include people and other robots. The team will explore interoperability of robots with different kinematics, sensors, and control frameworks. The focus of the research will be on higher-level cognitive modules like manipulation planning, task planning, dynamic movements, error detection, and recovery. The driving principle is to design a system that allows users to easily teach robots to perform new tasks, with little to no parameter tweaking and minimal predetermined, domain-specific information.

JSK will also combine ROS with three other major frameworks: EusLisp, OpenRAVE, and OpenRTM. For almost 20 years, the JSK lab has been developing a robot software framework based on EusLisp, a robotics-oriented dialect of Lisp. The framework's combination with ROS will allow EusLisp-based programs to work in a more distributed, multi-process environment for next generation scripting and task execution.

The team's other project, OpenRAVE, is quickly becoming a popular platform for motion-planning algorithm development. OpenRAVE has challenged many traditional assumptions in autonomous manipulation about what is feasible, practical, reusable, and generalizable to other robots. OpenRAVE was created by Rosen Diankov at CMU, and he will continue his work on the platform at JSK.

Finally, JSK's OpenRTM project is directly funded by the Japanese government and has become the most widely supported robotics platform in Japan. Much prototype hardware in Japan, including the latest humanoid robots, provides drivers directly through OpenRTM. Eventually, the JSK team hopes to see highly reliable, specialized modules emerge within EusLisp, OpenRAVE, and OpenRTM, all reusing the distributed environment of ROS.

The team's long-term goal is to build a robot that can help in daily life, both in the office and home. In the office environment, this robot will be capable of retrieving, fetching, carrying, and passing objects across rooms and buildings. In a nursing home context, the robot will be able to use home appliances and other tools to support aging and disabled populations.

The Team

The project will involve efforts from the entire JSK laboratory consisting of approximately 30 students and 10 faculty. The project leaders are:

Presentation

Below is a video of Kei Okada presenting JSK's proposal to the rest of the PR2 Beta Program participants. You can download the slides as PDF.

July 7, 2010

With the Beta Program robots on their way out into the world, we've launched the Willow Garage Support Site to provide a single location for PR2 users to get all of their support questions answered.  On the site, you can access resources including the PR2 manual, repair procedures, mailing lists, and maintenance support tickets.

PR2 user or not, you may enjoy watching our memorable Safety Video.

July 6, 2010

Around 5 PM on Fridays, many of us here at Willow Garage start thinking that a cold one would taste pretty good.  However, we often have a few loose ends to tie up before the weekend begins in earnest.  In this situation we've often thought about how perfect it would be to have the robot autonomously deliver beer.  The goal of Willow Garage's third summer hackathon was to make this dream a reality.  The idea of the hackathon is to start hacking Monday morning and demo on Friday afternoon, using all of the existing ROS tools and packages.  Sleep is highly optional. 

For this hackathon, our goal was to make beer fetching as robust and user-friendly as possible.  We also wanted the robot to safely transport the beer to any office, which requires navigation with the arms tucked.  For safe beer transport we designed a bar-keeper add-on to the PR2 base -- a four-holed foam block placed behind the robot's base navigation laser.  Three round holes are for stowing beers during navigation, with the fourth hole storing a convenient beer opener that the robot can pick up.  Equipping our standard fridge with a tilted self-stocking rack meant that the robot could service many user requests without human intervention.    

The user experience begins with the Beer Me web application.  In this web app, the user is presented with a menu of ice-cold beers and ciders, and a pull-down menu specifying the office for delivery.  Once the user hits the enticing Beer Me button, it's the robot's job to make the magic happen.  The robot navigates to the fridge, identifies the door, and detects the handle to determine a precise grasp location for opening.  The robot then grasps and pulls open the handle, and positions itself between the door and the fridge to make sure the door doesn't close.

The robot uses object recognition to determine which beers are in the rack, and will report back to the app if the user's selection is not available.  Otherwise, it stocks the ordered beers into the foam holder, closes the door, and navigates to the indicated office.  The final piece of the puzzle is the handoff.  We wanted to make very sure the robot didn't commit a party foul and drop beers on the floor, so we added face detection to the handoff behavior.  The robot offers a beer and waits until it detects a face in near proximity.  It then looks at the closest person and will release the gripper when the beer is tugged.  The robot will also offer the bottle opener and wait for it to be returned.  We even got the robot to open beers itself using a standard bottle opener. 
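
The handoff logic is easy to sketch. The following toy Python mimics the gating described above: don't release the bottle until a face is detected close by and a tug is felt on the gripper. The sensor stubs are stand-ins we invented for illustration; the actual hackathon code ran on ROS with the PR2's face detector and fingertip pressure sensors.

```python
import itertools
import time

# Stub sensors standing in for the real perception stack; the actual
# hackathon code is not shown here, so these interfaces are invented.
_face_polls = itertools.count()
def face_detected_nearby():
    """Pretend a face enters close range on the third poll."""
    return next(_face_polls) >= 2

_tug_polls = itertools.count()
def gripper_tug_force_n():
    """Pretend the user tugs the bottle (5 N) on the fourth poll."""
    return 5.0 if next(_tug_polls) >= 3 else 0.0

def hand_off_beer(tug_threshold_n=3.0, poll_s=0.1):
    """Gated handoff: never open the gripper until a face is close AND
    the bottle is being pulled -- the 'no party foul' rule."""
    print("offering beer, waiting for a person...")
    while not face_detected_nearby():
        time.sleep(poll_s)
    print("face detected: looking at the closest person")
    while gripper_tug_force_n() < tug_threshold_n:
        time.sleep(poll_s)
    print("tug felt: opening gripper")

hand_off_beer()
```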

Roboticists get used to hearing, "That's pretty cool, but can it bring me a beer?"  Well, now the PR2 can, and it may even open the bottle for you.

RViz beer classification

July 6, 2010


A while back, we decided to create an Open-Source Code Award for researchers who release the code behind their publications as open source.  We believe that releasing working code with papers will improve scientific practice and accelerate progress in our field.  The goal of this award is to thank those who contribute to the open-source robotics community, and encourage others to follow suit.

At RSS 2010, in Zaragoza, Spain, we announced the first winners of this Open-Source Code Award.  Selected from among the authors of papers published at RSS this year, Sertac Karaman and Emilio Frazzoli won the award for the code accompanying their paper "Incremental Sampling-based Algorithms for Optimal Motion Planning".  The committee selected the authors for releasing their RRT* library.  In particular, the committee noted that the code helps the reader understand the paper, and is reusable and extensible.  Additionally, the code does not require any non-open-source software to run.
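
For readers unfamiliar with the paper, here is a heavily simplified 2D sketch of the RRT* idea: grow a random tree as in RRT, but have each new node pick the cheapest nearby parent and rewire neighbors through itself when that lowers their cost. This toy omits edge collision checking and cost propagation to descendants, and is in no way the authors' library.

```python
import math
import random

def rrt_star(start, goal, is_free, n_iters=2000, step=0.5, radius=1.0,
             lo=0.0, hi=10.0):
    """Toy 2-D RRT*: steer toward random samples; each new node picks
    the cheapest nearby parent, then rewires neighbors through itself
    when that lowers their cost."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    nodes, parent, cost = [start], {start: None}, {start: 0.0}
    for _ in range(n_iters):
        rnd = (random.uniform(lo, hi), random.uniform(lo, hi))
        nearest = min(nodes, key=lambda n: dist(n, rnd))
        d = dist(nearest, rnd)
        if d < 1e-9:
            continue
        s = min(step, d) / d                     # steer at most 'step' toward rnd
        new = (nearest[0] + s * (rnd[0] - nearest[0]),
               nearest[1] + s * (rnd[1] - nearest[1]))
        if not is_free(new):
            continue
        near = [n for n in nodes if dist(n, new) <= radius]
        best = min(near, key=lambda n: cost[n] + dist(n, new))
        nodes.append(new)
        parent[new], cost[new] = best, cost[best] + dist(best, new)
        for n in near:                           # the rewiring step
            c = cost[new] + dist(new, n)
            if c < cost[n]:
                parent[n], cost[n] = new, c
    end = min(nodes, key=lambda n: cost[n] + dist(n, goal))
    path = [goal, end]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return list(reversed(path))

# Obstacle-free toy world; a real planner would check collisions in is_free.
path = rrt_star((1.0, 1.0), (9.0, 9.0), is_free=lambda p: True)
print(f"{len(path)} waypoints from {path[0]} to {path[-1]}")
```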

We look forward to handing out more awards in the future, and hope to see an increase in open-source publications!

Check here to read the winning publication.

July 2, 2010

TidyUpRobot

One of the most intriguing visions in robotics is that of a robotic housemaid capable of helping us with everyday household tasks.  Researchers at Albert-Ludwigs-Universität Freiburg in Germany have now started to develop a robotic application for cleaning up untidy rooms using the PR2.

Imagine you just had a dinner party with your friends.  Everybody has left, and you would head to bed were it not for the mess left behind: dirty plates and half-empty glasses litter the table.  As you begin to clear the dishes, you remember a new personal robot application that recently appeared in the App Store.  A few moments pass and you've found and downloaded the TidyUpRobot Application.  Shortly thereafter, your personal robot enters the room, analyzes the dinner table using its laser scanner and cameras, and begins bringing the glasses and plates to the dishwasher and the leftover food to the fridge.  Science fiction?  Albert-Ludwigs-Universität Freiburg will use their PR2 to continue their work towards this goal.

Over the next two years, the PR2 team from the University of Freiburg will work on both the theoretical and practical problems of enabling a household robot to reliably and autonomously clear objects from a table and return them to where they belong.  Such an accomplishment will require progress in a number of robotics research areas, including navigation, perception, and manipulation.  Initially, the robot must obtain a map of its environment so that it can navigate from room to room.  The robot must be able to recognize important items, such as the trashcan and dishwasher, and remember their locations.  Additionally, the robot needs to learn how to grasp a wide variety of objects, as different objects require varying grasp positions and handling.  The University of Freiburg aims to piece together these and other open robotics challenges, developing robotic capabilities that will make our lives more efficient and productive.

The Team

The TidyUpRobot project is a joint initiative of three internationally-renowned research labs at the University of Freiburg:

The doctoral students and postdocs that will jointly work on the PR2 project are:

Inquiries from students interested in working with the PR2 in Freiburg are always welcome.

Presentation

Below is a video of Jürgen Sturm presenting the University of Freiburg's proposal to the rest of the PR2 Beta Program participants. You can download the slides as PDF.

Article written with assistance from Jürgen Sturm.