Willow Garage Blog

August 3, 2011

Aitor Aldoma from the Technische Universität Wien (TUW) spent his internship at Willow Garage working on a 3D feature called the Clustered Viewpoint Feature Histogram (CVFH), useful for object recognition and pose estimation of rigid objects. The feature was integrated into PCL and the PR2 grasping pipeline to allow grasping of objects in tabletop scenes (see video).

CVFH is a semi-global feature that can deal with partial occlusions, noise, and segmentation artifacts. Moreover, it can be learned from synthetic CAD models of objects and still perform very well when recognizing objects seen with a depth sensor such as the Microsoft Kinect mounted on the PR2. Because CVFH is invariant to rotation about the camera's roll axis, the Camera Roll Histogram (CRH) was proposed to resolve this final degree of freedom and provide a full 6DOF pose of the object in the scene.
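
In PCL, CVFH descriptors can be computed with the pcl::CVFHEstimation class. Below is a minimal sketch of computing CVFH signatures for an already-segmented object cluster; the search radius and thresholds are illustrative placeholders, not the values used in the published evaluation.

```cpp
#include <cmath>
#include <pcl/point_types.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/cvfh.h>
#include <pcl/search/kdtree.h>

// Compute CVFH descriptors for a single, already-segmented object cluster.
pcl::PointCloud<pcl::VFHSignature308>
computeCVFH (const pcl::PointCloud<pcl::PointXYZ>::ConstPtr &cluster)
{
  // Estimate surface normals for the cluster.
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ>);
  pcl::PointCloud<pcl::Normal>::Ptr normals (new pcl::PointCloud<pcl::Normal>);
  pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
  ne.setInputCloud (cluster);
  ne.setSearchMethod (tree);
  ne.setRadiusSearch (0.01);  // 1 cm radius -- illustrative value
  ne.compute (*normals);

  // Compute the CVFH signature(s); one histogram is produced per smooth
  // surface region found on the object.
  pcl::CVFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308> cvfh;
  cvfh.setInputCloud (cluster);
  cvfh.setInputNormals (normals);
  cvfh.setSearchMethod (tree);
  cvfh.setEPSAngleThreshold (5.0f / 180.0f * static_cast<float> (M_PI));  // illustrative
  cvfh.setCurvatureThreshold (1.0f);                                      // illustrative
  cvfh.setNormalizeBins (false);

  pcl::PointCloud<pcl::VFHSignature308> descriptors;
  cvfh.compute (descriptors);
  return descriptors;
}
```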

Being able to learn on CAD models has several advantages. Besides simplifying the training stage, since there is no need for calibrated systems, there are several grasp simulators that, given a CAD model of the PR2 gripper and a CAD model of an object, can compute several grasp hypotheses for the object off-line. Once the PR2 recognizes any of these objects in a real scene and its pose is fully estimated, the grasps learned off-line can be used to grasp the real object.

For more technical details and experimental evaluations, please visit http://pointclouds.org.

August 2, 2011

The folks at MIT in the PR2 Beta Program have accomplished their delectable goal of baking cookies from scratch. A little over a month ago, Mario Bollini and Daniela Rus in the Distributed Robotics Lab had "BakeBot" mixing ingredients in a bowl. Now they have it doing the full task of making the dough from scratch and baking it in the oven.

For more information, please see the IEEE Spectrum Automaton article.

July 18, 2011

Bosch is unique among PR2 Beta Program recipients in that it’s not a big university with a well-established robotics research program. Rather, Bosch is best known for making everything from automotive parts to power tools to home appliances, including the washer and dryer that UC Berkeley is using to teach its PR2 to do laundry. So how did Bosch end up as part of the PR2 Beta Program? Engineer Benjamin Pitzer explains that as an industrial research lab, Bosch is able to “offer something that other people can’t.” Instead of just teaching its PR2 to do clever things, Bosch is helping the robotics community work towards the long-term, big picture for commercial household robotics: namely, figuring out what it’s going to take to get a PR2-like robot into your house.

To make this fantasy happen for real, Bosch is working on ways of making robots safer, more capable, and more affordable, and its PR2 Remote Lab tackles all of these issues at the same time. Bosch is building this Remote Lab together with Brown University, which recently purchased a PR2 of its own. The idea behind the Remote Lab is to develop a framework that allows a PR2 to be controlled over the Internet, providing a browser-based infrastructure that includes sensor feedback, 3D models, and camera streams. Before you get any ideas, though, Bosch says that there are safeguards in place to prevent you from remotely driving its PR2 (named Alan) out the door and causing some serious mayhem.

The Remote Lab enables human-in-the-loop control of a robot, where a person can jump in and take over for an otherwise autonomous robot if it encounters a particularly tricky or dangerous task. This makes robots both more capable and cheaper -- instead of having to design a robot that’s 100% autonomous, you can instead build one that’s 90% autonomous (which is much easier to do), and just have a human remotely take care of the other 10% when necessary.

Like UC Berkeley, Bosch has been teaching its PR2 to fold clothes, but it has taken a different approach, setting aside the perception problem that Berkeley has been focusing on. Instead, it's working on the next step beyond perception: determining good overall policies for folding and unfolding clothes that the robot may not already be familiar with. This is part of the point of the PR2 Beta Program specifically, and ROS in general: different sites can explore different aspects of the same problem, and then combine their progress to solve complex problems (like folding) much, much faster.

Another demonstration that Bosch has been working on is the product of a hackathon, where the entire engineering team focused on one single problem over the course of a week. During this week, they taught their PR2 to use a Dremel tool (made by Bosch, of course) to carve pictures and text into a plank of wood. Bosch has developed a special controller that uses sensor feedback to give the PR2’s arms more precise motions, and it hopes that cheap sensors will enable the next generation of less expensive, more capable robot arms.

Bosch’s ultimate vision is to develop a generalized household robot that’s affordable by just about everyone. Technologies like the Remote Lab and inexpensive sensors will allow for an affordable, capable, Rosie-style household robot, and when you buy one in the not-too-distant future, there’s a good chance it’s going to be from Bosch.

July 13, 2011

We are very excited to announce a new collaboration between Willow Garage, the Healthcare Robotics Lab at Georgia Tech, and Henry and Jane Evans. We call this project Robots for Humanity.

Henry Evans is a mute quadriplegic, having suffered a stroke when he was just 40 years old. Following extensive therapy, Henry regained the ability to move his head and use a finger, which allows him to operate computers. Last year, Henry saw a TV interview with Georgia Tech Professor Charlie Kemp showing research with the Willow Garage PR2 robot. Henry contacted Willow Garage and Professor Kemp shortly afterwards, and we have been collaborating ever since.

We are currently exploring ways for Henry to use a PR2 robot as his surrogate. Every day, people take for granted the simple act of scratching an itch. In Henry's case, 2-3 times every hour of every day he gets an itch he can't scratch. With the aid of a PR2, Henry was able to scratch an itch for himself for the first time in 10 years.

While this is only a first step, it demonstrates how people with severe physical disabilities could use personal robots to gain independence. In another example, Henry recently used the PR2 to shave his cheek. We are actively researching ways for Henry and others to perform tasks like these on a daily basis.

Currently, Henry uses a head tracker to operate a variety of experimental user interfaces. These interfaces allow him to directly move the robot's body, including its arms and head. They also let him invoke autonomous actions, such as navigating in a room and reaching out to a location.
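
As a rough illustration of what invoking an autonomous action means at the ROS level (this is not Henry's actual interface code), a navigation request can be dispatched to the standard move_base action server via actionlib:

```cpp
#include <ros/ros.h>
#include <actionlib/client/simple_action_client.h>
#include <move_base_msgs/MoveBaseAction.h>

typedef actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction> MoveBaseClient;

int main (int argc, char **argv)
{
  ros::init (argc, argv, "send_nav_goal");

  // Connect to the move_base action server that runs the autonomous navigation stack.
  MoveBaseClient client ("move_base", true);
  client.waitForServer ();

  // Ask the robot to drive to a pose one meter ahead in the map frame.
  move_base_msgs::MoveBaseGoal goal;
  goal.target_pose.header.frame_id = "map";
  goal.target_pose.header.stamp = ros::Time::now ();
  goal.target_pose.pose.position.x = 1.0;    // illustrative target
  goal.target_pose.pose.orientation.w = 1.0; // face along the x-axis

  client.sendGoal (goal);
  client.waitForResult ();

  return client.getState () == actionlib::SimpleClientGoalState::SUCCEEDED ? 0 : 1;
}
```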

Robots that complement human abilities are extremely valuable, especially when they help us do things that we can't do by ourselves.  Our goal is to get robots in homes to help people like Henry and Jane Evans.  This is just the beginning.

Robots for Humanity

July 6, 2011

Emma Zhang, from the Robotics Lab at the Rensselaer Polytechnic Institute in New York, recently completed an internship at Willow Garage. Emma worked on an algorithm that attempts to identify object locations suitable for grasping with a parallel gripper, locations which we refer to as "graspable features".

Using point clouds from a depth camera as input, the method extracts a gripper-sized voxel grid around a potential graspable feature, encoding the occupied, empty, and unknown regions of space. The result is then matched against a large set of similar grids obtained from both graspable and non-graspable features computed and labeled using a simulator.
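
As a rough illustration of the idea (not the actual implementation in the graspable_features package), each candidate region can be encoded as a fixed-size grid of occupied/empty/unknown cells and compared against labeled training grids with a simple cell-wise distance:

```cpp
#include <array>
#include <cstddef>
#include <limits>
#include <vector>

// Hypothetical encoding of a gripper-sized region: each voxel is occupied,
// empty, or unknown (unobserved due to occlusion).
enum class Voxel : unsigned char { Empty, Occupied, Unknown };

constexpr std::size_t kGridSize = 16 * 16 * 16;  // illustrative resolution
using FeatureGrid = std::array<Voxel, kGridSize>;

struct TrainingExample
{
  FeatureGrid grid;
  bool graspable;  // label obtained from the grasp simulator
};

// Cell-wise distance between two grids; disagreements involving "unknown"
// cells are penalized less, since that space may or may not be occupied.
double gridDistance (const FeatureGrid &a, const FeatureGrid &b)
{
  double d = 0.0;
  for (std::size_t i = 0; i < kGridSize; ++i)
  {
    if (a[i] == b[i]) continue;
    d += (a[i] == Voxel::Unknown || b[i] == Voxel::Unknown) ? 0.5 : 1.0;
  }
  return d;
}

// Predict graspability from the label of the nearest training example.
bool predictGraspable (const FeatureGrid &query,
                       const std::vector<TrainingExample> &training)
{
  double best = std::numeric_limits<double>::max ();
  bool label = false;
  for (const TrainingExample &ex : training)
  {
    const double d = gridDistance (query, ex.grid);
    if (d < best) { best = d; label = ex.graspable; }
  }
  return label;
}
```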

Emma's results show that the outcome of the matching process is a good predictor of the quality of the grasp, as evaluated in simulation. We believe that, by operating directly on real-life sensor data and reasoning about missing information as well as sensed object surfaces, the graspable feature evaluation algorithm has the potential to tackle complex and/or cluttered scenes in the context of both autonomous and human-in-the-loop grasping tasks.

For more details, see Emma's presentation below, or check out the graspable_features package on ROS.org.

June 22, 2011

We're looking forward to seeing you in Los Angeles, USA, at RSS 2011 from June 27 to July 1, 2011! If you're interested in what Willow Garage has been up to lately, come check out our research talks and workshops.

Workshops

Monday, June 27:

Friday, July 1:

Papers/Posters

Monday, June 27:

Thursday, June 30:

Friday, July 1:

June 20, 2011

We’re happy today to announce the launch of TurtleBot.com, which gives developers new ways to access TurtleBot information and obtain TurtleBot hardware. If you are looking to purchase TurtleBot hardware, you can order parts or assembled kits from our new licensed vendors. We have also posted open-source hardware designs if you want to build your own robot from scratch.

We are excited to announce our three partners who will be producing licensed TurtleBot parts, kits, and fully assembled pre-installed robots. These partners are:

TurtleBot.com provides direct access to purchase parts from these partners.

TurtleBot.com also gives you access to everything you need to produce your own TurtleBot-compatible robot. You can access designs for all the hardware, including the laser cut plates, machined aluminum standoffs, and cable drawings. You can also download full electrical designs for the gyro and power regulator board. All of these documents are released under a FreeBSD Documentation License as part of our TurtleBot Open Source Hardware initiative.

We look forward to working with the TurtleBot community to use this open hardware and software platform to create new applications for robotics.

June 15, 2011

Jeannette Bohg, from the Computer Vision and Active Perception Lab, Centre for Autonomous Systems at the Royal Institute of Technology (KTH) in Stockholm, visited us this spring to work on integrating an object segmentation method into the interactive manipulation framework. Object segmentation is the task of identifying and outlining individual objects in a potentially cluttered scene. One of its applications is in the context of grasping and manipulation, where an object is first segmented and potentially recognized before it is manipulated.

The interactive segmentation is available as a plugin for the rviz visualization tool. A user can click on objects or select a region around them to provide the segmentation with initial information about the number of objects in the scene, their rough positions, and their appearance. From there, the segmentation is refined autonomously, producing individual object point clouds.

"Under the hood" of this user interface is an iterative two-stage algorithm that uses RGB and disparity information. In the first stage it performs pixel-wise labeling based on the current model parameters and then updates these parameters in the second stage. The algorithm can simultaneously optimize the segmentation for background, a supporting surface and a set of foreground objects. A CPU as well as a GPU based implementation are available.

Unlike purely geometric approaches, such as Euclidean segmentation, this method can better handle cluttered scenes and correctly separate objects from supporting planes. The method has also been adapted to cope with incomplete color information and tested using the Microsoft Kinect sensor.
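
For contrast, a typical purely geometric pipeline in PCL removes the dominant supporting plane with RANSAC and then clusters the remaining points by Euclidean distance. A minimal sketch with illustrative parameter values:

```cpp
#include <vector>
#include <pcl/point_types.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/search/kdtree.h>

std::vector<pcl::PointIndices>
euclideanSegment (const pcl::PointCloud<pcl::PointXYZ>::Ptr &cloud)
{
  // Fit and remove the dominant plane (e.g., the table surface).
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setModelType (pcl::SACMODEL_PLANE);
  seg.setMethodType (pcl::SAC_RANSAC);
  seg.setDistanceThreshold (0.01);        // 1 cm -- illustrative
  seg.setInputCloud (cloud);

  pcl::PointIndices::Ptr plane (new pcl::PointIndices);
  pcl::ModelCoefficients coeffs;
  seg.segment (*plane, coeffs);

  pcl::PointCloud<pcl::PointXYZ>::Ptr objects (new pcl::PointCloud<pcl::PointXYZ>);
  pcl::ExtractIndices<pcl::PointXYZ> extract;
  extract.setInputCloud (cloud);
  extract.setIndices (plane);
  extract.setNegative (true);             // keep everything that is not the plane
  extract.filter (*objects);

  // Cluster the remaining points by Euclidean distance.
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree (new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud (objects);

  std::vector<pcl::PointIndices> clusters;
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance (0.02);          // 2 cm -- illustrative
  ec.setMinClusterSize (100);
  ec.setSearchMethod (tree);
  ec.setInputCloud (objects);
  ec.extract (clusters);
  return clusters;
}
```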

The core segmentation method has been developed by Marten Björkman; for more details see Björkman et al., "Active 3D scene segmentation and detection of unknown objects", ICRA 2010. The GPU and CPU implementations of the segmentation algorithm are available in the active_realtime_segmentation ROS package. The interactive segmentation GUI is available in the object_segmentation_gui package. For more details on these packages, see their documentation pages on ROS.org.

June 14, 2011

The PR2s in the PR2 Beta Program are busy learning how to make you food. To start your day off with a nice Bavarian breakfast, TUM's PR2 and Rosie robots are working together to make you sausages, slice bread, flip pancakes, and set the table. They have also taught the PR2 some basic grocery-shopping tasks, like putting items into a grocery basket. The work at TUM was done by the CoTeSys (Cognition for Technical Systems) cluster of excellence, which is using these integrated demonstrations to push research in 3D perception, manipulation, and cognitive robotics. You can watch the video below or check out longer versions of the breakfast and shopping demonstrations. For more information, please see the Intelligent Autonomous Systems Group page.

If sweets are your preference, Mario Bollini at the Distributed Robotics Lab at MIT is working on teaching PR2 to make chocolate chip cookies from scratch. So far the robot pours and mixes the ingredients together, and Mario hopes to soon have it baking the cookies as well. You can read more at Popsci and IEEE Spectrum.

June 9, 2011

PR2 and ROS are being put to a variety of uses by the GRASP Lab at the University of Pennsylvania. Here are some quick updates.

Menglong Zhu taught PR2 to read signs, which is an important skill for robots navigating in the real world. For more information, see the article on ROS.org.

Safe Interval Path Planning (SIPP) by Mike Phillips and Maxim Likhachev (ICRA 2011 paper) is teaching PR2 better navigation skills around people. Using their planning algorithms, the robot can predict the movement of people and navigate more efficiently around them; you can also use it to get PR2 to play an awesome game of Frogger.

Philliebot threw out the first pitch at a Phillies game. Jordan Brindza and Jamie Gewirtz were nice enough to give ROS a shoutout during their ESPN interview:

Penn's "Aggressive" Quadrotors (Daniel Mellinger, Michael Shomin, Nathan Michael, Vijay Kumar) are now working together in teams to lift and transport objects:

PR2 is helping other robots out by opening doors for them. This was a final project by Ben Charrow for the MEAM 620 robotics course. Ben's "cooperative robotics" project focused on getting heterogeneous robots to work together in order to combine their capabilities.