Willow Garage Blog
Willow Garage has decided to enter the world of commercial opportunities with an eye to becoming a self-sustaining company. This is an important change to our funding model.
The success of the PR2 personal robot and of ROS will continue. There are close to 50 PR2 robots in the world, and Willow Garage's support of the platform will not diminish. And of course ROS, as an open source platform, will continue independent of our business model choices. In addition to Willow Garage, ROS's supporters include the Open Source Robotics Foundation and all the other contributors in the ROS community (academic, industrial, and individual) who have made it the platform of choice for robotics.
During his sabbatical at Willow Garage, Stéphane Magnenat from the Autonomous Systems Lab, ETH Zurich integrated a new Simultaneous Localization and Mapping (SLAM) solution. SLAM allows a robot to build a map of its environment, and to localize itself in this map. This system is based on a modular ICP algorithm, a collaboration with François Pomerleau and Francis Colas at the Autonomous Systems Lab, ETH Zurich.
ICP (Iterative Closest Point) is a classical algorithm for finding the transformation between two point clouds that represent the same environment from different viewpoints. This algorithm forms the basis of most SLAM systems working with laser or depth data. While the classical ICP algorithm is simple, it does not work well in most real 3D environments, so hundreds of variants have been published over the last 20 years. These variants address specific problems, but lack a common ground for comparison.
The work of Stéphane and his colleagues provides a framework in which different ICP variants can be tested, combined, and evaluated. The integration with ROS provides a real-time 2D and 3D SLAM system that can fit a large variety of robots and application scenarios without any code change or recompilation.
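To make the idea concrete, here is a minimal, translation-only ICP sketch in pure Python. It is an illustration of the core loop only (match points, estimate a correction, apply it, repeat); the ethzasl framework handles full rigid 3D transforms, data filtering, outlier rejection, and the many published variants.

```python
# Minimal translation-only ICP in 2D (illustrative sketch, not the
# ethzasl_icp_mapping implementation).

def closest(p, cloud):
    """Nearest neighbor of point p in cloud (brute force, 2D)."""
    return min(cloud, key=lambda q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def icp_translation(source, target, iters=20):
    """Iteratively match points and shift source toward target.
    Returns the accumulated (tx, ty) translation estimate."""
    tx = ty = 0.0
    moved = list(source)
    for _ in range(iters):
        # 1. Match each source point to its closest target point.
        pairs = [(p, closest(p, target)) for p in moved]
        # 2. Estimate the mean residual translation over all matches.
        dx = sum(q[0] - p[0] for p, q in pairs) / len(pairs)
        dy = sum(q[1] - p[1] for p, q in pairs) / len(pairs)
        # 3. Apply the correction and iterate.
        tx += dx
        ty += dy
        moved = [(x + dx, y + dy) for x, y in moved]
    return tx, ty
```

Each stage of this loop (matching, outlier filtering, error minimization) is exactly where the published ICP variants differ, which is what makes a modular framework for swapping those stages valuable.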
A paper describing this work will appear in the forthcoming "Autonomous Robots special issue on open source software for robotics research." A dataset paper proposing a variety of environments along with ground-truth poses was recently published in IJRR (link to http://ijr.sagepub.com/content/31/14/1705.abstract) (datasets freely available (link to http://projects.asl.ethz.ch/datasets/doku.php?id=laserregistration:laser...)). The open-source ROS SLAM node is available in the ethzasl_icp_mapping stack (link to http://ros.org/wiki/ethzasl_icp_mapping).
During his recent internship, Rahul Udasi, an undergraduate student at the University of Waterloo, created a motor diagnostic tool for the PR2. The tool allows PR2 users to find motor faults quickly by collecting diagnostic data from the motors and analyzing it. The results are presented in graphs and messages that indicate what might be wrong with the PR2's joint motors, all without having to remove any covers.
The first step in the tool is to select which joint motors the user wants to diagnose. Next, the user moves the joints of the selected motors for about 4 seconds to record enough data, then moves on to the next joint. For convenience, the PS3 joystick is used to indicate when the user is done with one joint and ready to move to the next. After the data is recorded, the user has access to comprehensive logs and analysis of motor performance. If something looks wrong, the software will tell the user where to look for issues with the joint motor.
By using the tool, users can quickly diagnose motor faults without the need to remove the motors from the assembly, a time-consuming and specialized task.
For more information please visit the PR2 motor diagnostic page.
With respect to the recognition library, Stefan assisted in implementing LINEMOD, a highly efficient template-matching approach for detecting texture-less objects in heavily cluttered scenes. LINEMOD operates on Kinect data: to detect objects, it combines color gradients computed from the image data with surface normals computed from the depth data.
Within the machine learning library, Stefan worked on implementing a flexible and efficient decision tree learning framework. This was applied to obtain keypoint detectors with improved repeatability and efficiency. Additionally, tools were created to evaluate existing detectors and to create the data for learning.
Having a state-of-the-art object detection method in PCL enables efficient detection of texture-less objects with Kinect data. Thanks to the new learning framework, it is now possible to create keypoint detectors with improved detection characteristics, which are useful for localization and object detection.
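The basic building block of such a learning framework can be sketched in a few lines. The code below learns a single decision "stump" (a depth-1 tree) by exhaustive threshold search over information gain; this is illustrative only, and the PCL framework generalizes the idea to full trees over image features.

```python
# Illustrative decision-stump learner (not the PCL implementation).
import math

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))

def learn_stump(samples, labels):
    """Find the (feature index, threshold) split with highest info gain."""
    best = (None, None, -1.0)
    for f in range(len(samples[0])):
        for t in sorted({s[f] for s in samples}):
            left = [l for s, l in zip(samples, labels) if s[f] <= t]
            right = [l for s, l in zip(samples, labels) if s[f] > t]
            if not left or not right:
                continue
            gain = entropy(labels) - (
                len(left) / len(labels) * entropy(left)
                + len(right) / len(labels) * entropy(right))
            if gain > best[2]:
                best = (f, t, gain)
    return best
```

A tree learner applies this split search recursively to each side of the best split; trained on patches labeled "keypoint" vs. "background", the resulting trees act as fast, repeatable detectors.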
Willow Garage would like to congratulate the team behind the Open Motion Planning Library (OMPL), which was recently awarded the grand prize in the Open Source Software World Challenge. The OSSWC is an annual competition hosted by Korea's Ministry of Knowledge Economy with the goal of promoting open source software and expanding exchanges among open source software developers all over the world. Fifty-five teams from twenty-three countries participated in this year's competition, with OMPL coming out on top.
OMPL is developed and maintained by the Physical and Biological Computing Group at Rice University, led by Dr. Lydia Kavraki. The development is coordinated by Dr. Mark Moll, Dr. Lydia Kavraki (Rice) and Dr. Ioan Șucan.
Willow Garage is proud of our role in supporting OMPL, particularly through the contributions of Dr. Ioan Șucan, Dr. Sachin Chitta and Dr. Gil Jones. The award-winning code is based on an initial version of OMPL written by Ioan Șucan while he was an intern at Willow Garage. Sachin, Gil and Ioan presented aspects of OMPL at ROSCon 2012.
During his summer internship at Willow Garage, Julian "Mac" Mason, a Ph.D. candidate from Duke University, worked on the Megaworldmodel: a framework for large-scale, long-term semantic maps. In contrast to occupancy maps (which model free and occupied space), semantic maps model the location of objects, and (when possible) their identities. Determining these identities is difficult: object recognition remains an open problem. For this reason, the Megaworldmodel provides a generic interface to object recognition systems, allowing existing tools to be easily integrated. Two such tools have already been included: Willow Garage's textured_object_detection, and Hilton Bristow's implementation of Deva Ramanan's deformable-parts model.
The Megaworldmodel cleanly encapsulates the capture, processing, and mapping of recognizable objects, and the querying of the resulting map. However, not all objects are recognizable! State-of-the-art object recognition algorithms require extensive supervised training to accurately recognize objects. In large, general environments, manual training is intractable. There are simply too many objects. To enable large-scale semantic mapping, the Megaworldmodel includes tools for active object search (using a Kinect-equipped PR2) and for unsupervised object discovery. While autonomously exploring an environment, the robot will encounter objects (which it cannot yet recognize) from many different viewpoints. Using unsupervised segmentation, these objects can be detected, and then clustered into training examples for existing object recognition techniques. Although this does not provide semantic labels (you get "object 6," not "coffee cup"), it does allow object instances to be recognized in other locations, and at other times. Ongoing work seeks to scale this technique to extremely large datasets, permitting the entirely unsupervised creation of a large object database.
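The discovery step described above can be sketched with a simple greedy clustering of view descriptors: each unrecognized view is assigned to the nearest existing cluster, or starts a new one. The names, distance metric, and threshold below are illustrative assumptions, not taken from the Megaworldmodel code.

```python
# Illustrative sketch of unsupervised object discovery by clustering.

def dist(a, b):
    """Euclidean distance between two feature descriptors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def cluster_views(descriptors, threshold=1.0):
    """Assign each view descriptor to the nearest existing cluster,
    or start a new cluster ("object N") if none is close enough.
    Each cluster can then seed a training set for a recognizer."""
    centers, assignments = [], []
    for d in descriptors:
        if centers:
            i = min(range(len(centers)), key=lambda j: dist(d, centers[j]))
            if dist(d, centers[i]) <= threshold:
                assignments.append("object %d" % i)
                continue
        centers.append(d)
        assignments.append("object %d" % (len(centers) - 1))
    return assignments
```

As in the text, the output labels are anonymous ("object 6", not "coffee cup"), but views of the same instance seen at different times and places end up in the same cluster.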
More information about the Megaworldmodel is available here.
The ICRA 2013 Mobile Manipulation Challenge
We would like to invite participation in the ICRA 2013 Mobile Manipulation Challenge, built around the theme of “Robot’s Kitchen”. Teams may use their own robots or the PR2 robots that will be provided by the organizers. This is the second year the challenge has been organized, following last year’s “Yesterday’s Sushi” challenge at ICRA 2012 in St. Paul.
Deadline for intent to participate: January 15, 2013
Challenge website: http://
Details about the challenge problem, the prizes on offer, registration and qualification, and the preparatory workshop to be held at Willow Garage in early March are available on the challenge website.
Looking forward to seeing you at ICRA 2013 in Karlsruhe!
Hello ROS Community,
ROS Groovy Galapagos
Mass migration of code to GitHub
New Build System - catkin
Removal of Stacks
New Package Release System - bloom
New GUI Tools - rqt
pluginlib and class_loader
Automatic Documentation Jobs
Moving From rosbuild To catkin
Change from Wx to Qt
laser_drivers REP 117 Deprecation Completed
Change Lists of Note
Plans and Special Interest Groups
ROS Enhancement Proposals (REP)
During his internship at Willow Garage, Rob Linsalata from Tufts University worked on running ROS on small, low power ARM-based processors. Rob focused on driving the TurtleBot around using the TurtleCore -- a small embedded computer produced by Gumstix.
Rob spent his time at Willow Garage integrating the TurtleCore with the TurtleBot. After the initial hardware integration, he focused on getting ROS running on the TurtleCore's Overo ARM processor, providing documentation and procedures for bringing the board up. He then helped test and debug ROS functionality running on the ARM architecture.
The TurtleBot comes by default with an Asus netbook. The netbook accounts for a significant fraction of the computation, cost, and power consumption when the TurtleBot is operating. By using a smaller ARM-based processor the TurtleBots can be made to run longer using less expensive parts.
For more details, see ros.org/wiki/TurtleCore
During his internship at Willow Garage, Jonathan Mace from Brown University worked on building an industrial strength successor to rosbridge, a popular ROS package for connecting to ROS from a Web browser. His internship concluded with the release of the rosbridge suite, a robust and extensible collection of packages that facilitate Web-based and non-ROS connection to ROS.
Web browsers are a compelling choice for writing front ends to robot applications. In particular, they offer a ubiquitous, interoperable platform for robot interaction. Given that end users of robot applications may require little or no knowledge of the underlying robot middleware, decoupling a Web-based front end from a ROS-dependent back end is a promising direction to pursue.
In order to facilitate this decoupling, the rosbridge suite provides an access point for Web browsers (and other WebSocket-compatible systems) to access ROS. The rosbridge suite also provides components to automate installation and runtime linking of Web components. As such, the rosbridge suite makes it much easier for ROS developers to include a Web component in their work.
The rosbridge suite primarily contains a Web server which runs inside the ROS environment. This Web server listens for incoming WebSocket connections, and exchanges JSON-based messages with connected clients. Clients can instruct rosbridge to call ROS services, subscribe and publish to ROS topics, or introspect the ROS runtime. Response messages originating in the ROS runtime are propagated back to the client. Thus, Web browsers and middleware separate from ROS can still fully interact with a running ROS system.
The structure of the JSON messages exchanged between clients and the rosbridge server is defined in the rosbridge protocol. To make the protocol more extensible and pluggable, it was redefined and formally specified. It is similar in spirit to the protocol used by the original rosbridge, but offers more customization.
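To give a flavor of the protocol, here are two rosbridge-style messages built with nothing but the standard library. The field names follow the rosbridge v2 protocol as commonly documented ("op", "topic", "type", "msg"); treat them as illustrative and consult the protocol specification for the authoritative list.

```python
import json

# Ask the server to subscribe this client to a topic:
subscribe = json.dumps({
    "op": "subscribe",
    "topic": "/turtle1/pose",
    "type": "turtlesim/Pose",
})

# Publish a velocity command to a topic through the bridge:
publish = json.dumps({
    "op": "publish",
    "topic": "/cmd_vel",
    "msg": {"linear": {"x": 0.5, "y": 0.0, "z": 0.0},
            "angular": {"x": 0.0, "y": 0.0, "z": 0.0}},
})

# A client sends these strings over an open WebSocket connection;
# incoming topic traffic arrives as JSON "publish" messages in return.
```

Because every operation is a small self-describing JSON object, any WebSocket-capable client, browser or otherwise, can drive a ROS system without linking against ROS itself.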
Other packages in the rosbridge suite include: roswww, an HTTP Web server that runs in the ROS runtime; rosapi, a node that advertises services that introspect the ROS runtime; and tf smart throttle, a node that intelligently throttles tf messages for low-bandwidth connections.