Willow Garage Blog

November 9, 2011

Ze'ev Klapow from Petaluma High School returned to Willow Garage for a second summer internship. He focused on expanding the features of dynamic_reconfigure in the navigation stack. dynamic_reconfigure lets you change a running node's parameters on the fly, through a GUI or programmatically.

Previously, you had to shut down the navigation stack, open a file, change the parameter(s), and then restart it. Now you simply supply the new value, and formerly painful processes like tuning planners or updating the robot's footprint on the fly are as easy as adjusting a slider or calling the dynamic_reconfigure API. With these updates, available in ROS Electric, you can now configure almost all navigation stack components at run-time.

Ze'ev also added a "parameter groups" feature to dynamic_reconfigure, so that long lists of parameters can be organized into manageable groups. The groups feature will ship with ROS Fuerte Turtle, coming in March 2012.
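A node's reconfigurable parameters are declared in a .cfg file; with the groups feature, related parameters can be declared under named groups. The sketch below shows what such a file might look like — the package, node, and parameter names are illustrative examples, not taken from the post:

```python
#!/usr/bin/env python
# Illustrative dynamic_reconfigure .cfg file using the parameter groups
# feature.  Package, node, and parameter names here are hypothetical.
PACKAGE = "my_planner"

from dynamic_reconfigure.parameter_generator import *

gen = ParameterGenerator()

# Group related planner parameters so long lists stay manageable in the GUI.
speed = gen.add_group("Speed")
speed.add("max_vel_x", double_t, 0, "Max forward velocity (m/s)", 0.5, 0.0, 2.0)
speed.add("max_vel_theta", double_t, 0, "Max rotational velocity (rad/s)", 1.0, 0.0, 3.14)

footprint = gen.add_group("Footprint")
footprint.add("inflation_radius", double_t, 0, "Obstacle inflation radius (m)", 0.55, 0.0, 2.0)

exit(gen.generate(PACKAGE, "my_planner_node", "MyPlanner"))
```

At run-time, each group then appears as a collapsible section in the reconfigure GUI rather than one flat list.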

See Ze'ev's video for details. To use dynamic_reconfigure in your development, visit the ROS wiki page.

November 8, 2011

The CNRS Laboratory of Analysis and Architecture of Systems (LAAS-CNRS) in Toulouse, France owns a PR2 that they have programmed for many human-robot interactions. Recently, LAAS had some fun with the PR2 and staged an original theater performance, titled Roboscopie.

In this stage play, the actor teaches the PR2 to "experience" common everyday things, such as a jacket, a shelter, a blue bottle, and a ringing phone. Get a good laugh when these two exercise together on stage. Watch the short clip above, or see the full-length version of Roboscopie on YouTube, to see how the PR2 and friend take center stage.

Update: for more information, check out the official Roboscopie website.

November 2, 2011

In a little less than three months, Yiping Liu from Ohio State University made a significant update to the camera_pose stack, making it possible to calibrate cameras that are connected by moving joints and to store the result in a URDF file. The stack captures both the relative poses of the cameras and the states of the joints between them. The optimizer can run over multiple camera poses and multiple joint states.

The goal was to calibrate multiple RGB cameras on a robot (such as the Microsoft Kinect, Prosilicas, and webcams) relative to one another. The results are automatically added to the robot's URDF.

Yiping set up a PR2 with a Kinect mounted on its head to demonstrate calibration between the onboard camera and statically mounted cameras. The PR2 was driven close enough to a statically mounted camera for both cameras to capture the checkerboard pattern. The internal optimizer produces better results as measurements accumulate.

In the other use case, the PR2 looks at itself in a previously mapped space. Yiping built a simple GUI in camera_pose_toolkits for choosing the camera to calibrate. The PR2 moved in front of the selected camera, and calibration was performed. The package publishes all calibrated camera frames to TF in real time. You can watch Yiping's camera calibration tests in his video.
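The core geometric idea — recovering one camera's pose relative to another from simultaneous views of the same checkerboard — comes down to composing homogeneous transforms. The sketch below is a deliberately simplified 2D illustration of that composition under our own assumptions, not the actual camera_pose code:

```python
import math

def make_T(theta, x, y):
    """2D homogeneous transform: rotation by theta plus translation (x, y)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def invert(T):
    """Invert a rigid 2D transform: transpose the rotation, rotate-negate
    the translation."""
    c, s, x, y = T[0][0], T[1][0], T[0][2], T[1][2]
    return [[c, s, -(c * x + s * y)],
            [-s, c, s * x - c * y],
            [0.0, 0.0, 1.0]]

def camera_to_camera(T_cam1_board, T_cam2_board):
    """Each camera observes the same checkerboard.  Composing camera 1's
    observation with the inverse of camera 2's yields camera 2's pose
    expressed in camera 1's frame."""
    return matmul(T_cam1_board, invert(T_cam2_board))
```

For example, if camera 1 sees the board at (0.5, 1.0) and camera 2, one meter to its right, sees it at (-0.5, 1.0), the composed transform places camera 2 at (1.0, 0.0) in camera 1's frame. The same chain rule, in 3D and over many accumulated board views, is what the optimizer refines.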

To use the camera pose toolkits in your work and to find the latest update, check the ROS.org site. Yiping also created tutorials with helpful information about calibrating multiple cameras.

November 1, 2011

Irene Rae from the University of Wisconsin-Madison spent the summer of 2011 exploring ways to change behavior in human-robot interaction (HRI). Irene worked with the Texai remote presence system, which reduces the need for remote employees to travel to attend meetings or to spend long hours commuting to the office.

The person working remotely pilots the Texai and works with locals in the office. Locals sometimes treat the Texai as an object and rest their feet on the base, invade the pilot's sense of personal space, block the cameras, or stand where the pilot cannot see them. In other cases, locals ran away when they heard the robot coming or put "Kick me" signs on it. This treatment of the robot and pilot is symptomatic of infrahumanization: the tendency to treat someone perceived as an out-group member as less human than those viewed as part of the in-group.

The study looked at how pilots of the Texai could be treated more like in-group members and fellow humans rather than out-group members, and tested ways to change this behavior through design or situational framing. To get locals to treat the pilot like part of the group, Irene tested whether decorating the Texai in the team's colors improved the pilot's treatment. In another case, she tested whether verbally framing the situation for participants improved the interaction.

Increasing in-group feelings between the locals and the pilot can lead to better behavior toward the pilot, and higher levels of cooperation, collaboration, and team efficiency.

October 28, 2011

Jochen Sprickerhof from the Knowledge-Based Systems Group of the University of Osnabrück (Germany) visited Willow Garage this summer to do an internship on 3D registration and mapping.

Assembling massive datasets from a large number of individual point clouds is an important part of mobile robotics research. This allows robots to see beyond their immediate surroundings, localize in both 2D and 3D, and share large-scale maps built by other robots. One of the challenges here is how to efficiently estimate and correct the pose error in the trajectory of the robot, without sacrificing accuracy. For example, correcting high-dimensional registration data graphs that represent a large building or a city can take a very long time.

During his internship, Jochen ported his ELCH (Explicit Loop Closing Heuristic) registration framework into the Point Cloud Library. ELCH corrects collected sensor data by finding loops in the robot's trajectory, estimating the pose error accumulated while driving along each loop using point cloud registration, and distributing that error over the complete path. Our hope is that with techniques like ELCH, we will be able to scale up the environments mobile robots can operate in.
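The heart of the heuristic — once a loop closure reveals how far off the trajectory has drifted, spread the correction over the poses along the loop — can be sketched in a few lines. This is a simplified linear-weighting illustration of the idea, not PCL's actual ELCH implementation, which weights the correction using the structure of the pose graph:

```python
def distribute_loop_error(poses, error):
    """Spread an accumulated loop-closure error over a trajectory.

    poses: list of (x, y) positions along the loop, from start to end.
    error: (dx, dy) discrepancy between where the trajectory ended and
           where the loop closure says it should have ended.

    Each pose i receives a fraction i / (n - 1) of the correction, so the
    loop start stays fixed and the loop end is moved to close the loop.
    """
    n = len(poses)
    corrected = []
    for i, (x, y) in enumerate(poses):
        w = i / (n - 1)  # linear weight: 0 at loop start, 1 at loop end
        corrected.append((x + w * error[0], y + w * error[1]))
    return corrected
```

For instance, a straight run of poses ending 0.3 m past the detected closure point gets nudged back progressively, with the final pose receiving the full correction.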

For more details, see Jochen's presentation below (download PDF) as well as the corresponding publication (PDF).

October 17, 2011

IEEE's Automaton blog has a great article on one of the more entertaining demonstrations at the PR2 Workshop at IROS: P.O.O.P. S.C.O.O.P. (Perception Of Offensive Products and Sensorized Control Of Object Pickup). Ben Cohen, Daniel Benamy, Anthony Cowley, Will McMahan, and Joe Romano, all from the GRASP Lab at Penn, outfitted a PR2 with a pooper scooper and developed perception, navigation, and manipulation software to reliably detect and clean up artificial pet messes. The software achieved a reliability rate of 95% and was demonstrated live for the workshop audience.

IEEE Spectrum has more on this, and you can download the workshop paper here. More information will be posted on the ROS wiki soon.

October 11, 2011

We had a busy and fun time at this year's IROS 2011. Thanks to all of you who stopped by our booth and talks.  A lot of great robotics research was on display and we had a great time at the PR2 Workshop, talks, and interactive presentations.  The exhibition was very exciting and we put together a montage video to celebrate all of the robots in action.  Enjoy!

October 4, 2011

Hungry but too focused on your coding to leave your lab? IEEE Spectrum has posted a video from the University of Tokyo JSK Lab and Technische Universität München that shows PR2 going all the way from the upstairs JSK Lab, down the elevator to a Subway restaurant, and back, all on its own. Instead of having pre-programmed knowledge of where to buy the sandwich, the PR2 is able to do a "semantic search" that makes inferences about what sandwiches are and where they can be purchased in order to complete the task.

This demonstration relies on several new PR2 capabilities, such as manipulating elevator panels and adding multi-floor features to the ROS navigation stack. JSK and TUM also collaborated to make this a great integration of EusLisp, KnowRob, and ROS. We're really excited to see members of the PR2 Beta Program community working together to achieve even more impressive results.

For more information, please see the IEEE Spectrum article.

September 20, 2011

We're looking forward to seeing you in San Francisco, USA at IROS 2011 from September 25 - 30, 2011! If you're interested in checking out what Willow Garage has been up to lately, come check out our research talks and workshops. We'll also be demoing throughout the entire conference, so please come talk to us in the Exhibits hall at both the demonstration sessions and the Willow Garage booth!

Workshops

Sunday, September 25:

Monday, September 26:

Friday, September 30:

Papers/Posters/Panels

Sunday, September 25:

September 19, 2011

PR2 Beta Program

Given how pervasive the PR2 has become in academic institutions around the world, we thought it might be worth checking in with our PR2 community to see what research plans they have in place for the coming year. As always, we're inspired by the innovative research under way, but we were frankly surprised by the breadth and ambition of these initiatives. In our goal to catalyze the personal robotics industry, the more R&D underway, the better. The following brief descriptions provide some insight into personal robot applications in the not-too-distant future. These include household tasks such as laundry and clean-up; robot-to-robot cooperation; navigating within human environments; and even dancing and pet-sitting.

Albert-Ludwigs-Universität Freiburg

In the next year, the team at Freiburg will continue the TidyUp project and start the integration process. Currently, they are working on cleaning up items from tables and returning them to where they belong. The robots will wipe the tables and perhaps other furniture, such as shelves. They also plan to learn from human table settings in order to set the table for a selected number of people attending a meal.

Bosch Research and Technology Center

During the upcoming year, Bosch plans to continue pursuing both hardware and software developments. Bosch plans to continue development of their proximity sensor for safe teleoperation in dynamic environments. They also plan to create Web interfaces that can be used for multiple tasks as well as for different robots without additional coding. As part of their efforts on shared autonomy, Bosch plans to conduct a user study comparing different manipulation assistance interfaces, as well as release additional packages for shared autonomy task planning. Together with TUM, Bosch will release a pipeline for autonomous semantic mapping.

The George Washington University

Along with newly recruited faculty member Gabe Sibley, Professor Evan Drumwright will be co-teaching a class this fall on autonomous robots, using the PR2 as the platform of focus. Students will propose and carry out projects related to a theme; this semester's theme is getting the robot to perform tasks that aid in dog-sitting. Pets are important human companions: we don't like leaving them in a kennel while we are away, and it is hard to find someone you trust to watch your pet at your home. Also, it's a damn hard thing for a robot to do!

Massachusetts Institute of Technology

During this second year of the beta program, MIT's goal is integration. The key objective is to be able to look for objects that are out of sight, including moving objects out of the way and opening doors.

This will require the team to integrate their hierarchical task-level planner, which plans in belief space, with their state estimation algorithm, visibility modeling, RRT* motion planner and object localization system to demonstrate planning involving information gathering.

Stanford University

In the upcoming academic year, Stanford will use the PR2 to research methods to increase the productivity of robot teleoperators. They will investigate interaction modalities and user interfaces that combine autonomous execution of high-performing subsystems (e.g., robotic navigation) with human supervision of subsystems with lower success rates (e.g., correcting automatically-generated "garbage" or "not garbage" labels of point cloud clusters in a clean-up task). They anticipate that such interfaces will allow temporal, as well as spatial, separation between the teleoperator and the robot, with potential to dramatically increase teleoperator productivity on tasks currently too difficult to fully automate.

The University of California, Berkeley

With very robust results in place for folding of towels and sorting of socks, and promising results for folding of t-shirts, pants and sweaters, UC Berkeley will continue to focus on enabling the PR2 to perform the entire laundry task, from a basket with dirty laundry, to washing, drying, folding or hanging, and putting the articles away. UC Berkeley will also continue to work on (rigid) object instance detection, and investigate push-grasps under uncertainty.

University of Pennsylvania

Researchers at the GRASP Lab at Penn recently added two microphone "ears" to their PR2, Graspy, and posted their methods on the hardware mods list. They are now working on various ways to use audio input to enable Graspy to do interesting things. One thrust is to adapt work they have been doing for the DARPA ARM-S project to the PR2. The team at Penn has written ROAR, the ROS Opensource Audio Recognizer. ROAR enables the user to easily train a one-class SVM to recognize an important sound that might arise, intentionally or unintentionally, during execution of a certain action, such as a handheld drill turning on or an object being knocked over. Penn is also working on a demo that will make the PR2 move in interesting ways ("dance") when you play various musical instruments.

Penn is also doing work on physical human-robot interaction, building on the PR2-props code, which enables the PR2 to give high-fives and fist bumps. Other researchers at Penn are working on new methods for teleoperating mobile manipulator robots. They have code for providing quality vibrotactile feedback from the accelerometer in the robot's gripper, and are looking at various methods of measuring human arm movement and mapping it naturally to the robot.
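The pattern behind ROAR — train a one-class classifier on examples of a single important sound, then flag incoming frames as that sound or not — can be sketched with a deliberately simplified novelty detector. This toy stands in for ROAR's one-class SVM; the feature representation and all names here are our own illustration, not Penn's code:

```python
import math

class OneClassSoundDetector:
    """Toy stand-in for a one-class SVM: model the training feature
    vectors by their centroid, and accept a new vector only if it lies
    within a distance threshold learned from the training set."""

    def fit(self, features, slack=1.5):
        """features: list of equal-length feature vectors (e.g. per-frame
        spectral energies) recorded while the target sound was playing."""
        n, dim = len(features), len(features[0])
        self.centroid = [sum(f[d] for f in features) / n for d in range(dim)]
        dists = [self._dist(f) for f in features]
        # Threshold: worst training distance, padded by a slack factor.
        self.threshold = slack * max(dists) if max(dists) > 0 else slack
        return self

    def _dist(self, f):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(f, self.centroid)))

    def predict(self, f):
        """True if the incoming frame looks like the trained sound."""
        return self._dist(f) <= self.threshold
```

Trained on a few frames recorded while a drill is running, such a detector accepts similar frames and rejects, say, silence — the same accept/reject decision ROAR makes, though a real one-class SVM learns a far more flexible boundary.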

University of Tokyo

The JSK lab at the University of Tokyo has been using the PR2 robot to buy sandwiches at a local restaurant and deliver documents across offices. The technical issues they have been tackling are inter-floor navigation, on-site action learning, high level task planning and compiling, iPad interfaces, and knowledge database integration. These efforts are getting JSK one step closer to a real robot service application that can be used every day. They have already been teaching a class on ROS, OpenRTM, OpenHRP, and OpenRAVE, which raised a lot of awareness of the PR2 Beta Program throughout the University of Tokyo. In the second semester, the JSK lab will tackle the difficulties in getting the PR2 and a humanoid robot to cooperate together for a household task.

University of Ulster

The Cognitive Robotics Group at Ulster plans to spend the coming year mainly supporting research related to the IM-CLeVeR European FP7 project. The acronym stands for Intrinsically Motivated Cumulative Learning Versatile Robots.

More specifically, the IM-CLeVeR project aims at designing robots that cumulatively learn new, efficient skills through autonomous development based on intrinsic motivations, and reuse those skills to accomplish multiple, complex, externally assigned tasks. In the attached image, the robot was engaged in a task of cumulatively learning the appearance of objects placed on a table. In the next term, they plan to move forward in the direction of skill building, by having the PR2 solve complex problems using either skills it is provided with or new skills that it learns on demand.