Willow Garage Blog
During his sabbatical at Willow Garage, Stéphane Magnenat from the Autonomous Systems Lab, ETH Zurich, integrated a new Simultaneous Localization and Mapping (SLAM) solution. SLAM allows a robot to build a map of its environment and to localize itself within that map. The system is based on a modular ICP (Iterative Closest Point) algorithm, developed in collaboration with François Pomerleau and Francis Colas at the Autonomous Systems Lab, ETH Zurich.
ICP is a classical algorithm for finding the transformation between two point clouds that represent the same environment from different viewpoints. It forms the basis of most SLAM systems working with laser or depth data. While the classical ICP algorithm is simple, it does not work well in most real 3D environments, and hundreds of variants have therefore been published over the last 20 years. These variants address specific problems but lack a common ground for comparison.
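At its core, each ICP iteration alternates between matching points and solving for the rigid transform that best aligns the matched pairs. Here is a minimal point-to-point sketch in Python using NumPy; the function name and structure are illustrative, not taken from the ethzasl code, and real variants add outlier filtering, robust weighting, and point-to-plane error metrics:

```python
import numpy as np

def icp_point_to_point(src, dst, iters=20):
    """Minimal point-to-point ICP sketch: align src (N,3) onto dst (M,3)."""
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # 1. Match: pair each source point with its nearest destination point.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]
        # 2. Solve: best rigid transform via SVD of the cross-covariance (Kabsch).
        mu_s, mu_d = cur.mean(0), nn.mean(0)
        H = (cur - mu_s).T @ (nn - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_d - R_step @ mu_s
        # 3. Apply the step and accumulate the total transform.
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```

The "match" step is where most published variants differ (kd-trees, outlier rejection, sensor-noise models); a modular framework makes exactly that step swappable.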
The work of Stéphane and his colleagues provides a framework in which different ICP variants can be tested, combined, and evaluated. The integration with ROS provides a real-time 2D and 3D SLAM system that can fit a large variety of robots and application scenarios without any code change or recompilation.
A paper describing this work will appear in the forthcoming Autonomous Robots special issue on open source software for robotics research. A dataset paper proposing a variety of environments along with ground-truth poses was recently published in IJRR (http://ijr.sagepub.com/content/31/14/1705.abstract); the datasets are freely available (http://projects.asl.ethz.ch/datasets/doku.php?id=laserregistration:laser...). The open-source ROS SLAM node is available in the ethzasl_icp_mapping stack (http://ros.org/wiki/ethzasl_icp_mapping).
During his recent internship, Rahul Udasi, an undergraduate student at the University of Waterloo, created a motor diagnostic tool for the PR2. The tool allows PR2 users to find motor faults quickly by collecting diagnostic data from the motors and analyzing it. The results are presented as graphs and messages that indicate what might be wrong with the PR2's joint motors, all without having to remove any covers.
The first step is to select which joint motors the user wants to diagnose. Next, the user moves the joints for the selected motors for about four seconds to record enough data, then moves on to the next joint. For convenience, the PS3 joystick is used to indicate when the user is done with one joint and ready to move to the next. After the data is recorded, the user has access to comprehensive logs and analysis of motor performance. If something looks wrong, the software will tell the user where to look for issues with the joint motor.
By using the tool, users can quickly diagnose motor faults without the need to remove the motors from the assembly, a time-consuming and specialized task.
For more information please visit the PR2 motor diagnostic page.
With respect to the recognition library, Stefan assisted in implementing LINEMOD, a highly efficient template-matching approach for detecting texture-less objects in heavily cluttered scenes. LINEMOD operates on Kinect data: to detect objects, it uses color gradients computed from the image data as well as surface normals computed from the depth data.
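The core trick in LINEMOD is to quantize gradient (and normal) orientations into a few discrete bins, so that a template reduces to a sparse list of (position, orientation-bin) features that can be matched very cheaply. The toy sketch below illustrates only the image-gradient half of that idea in NumPy; the helper names are invented, and the real implementation adds orientation spreading, precomputed response maps, and the depth-normal modality:

```python
import numpy as np

def quantize_orientations(img, bins=8):
    """Quantize per-pixel gradient orientation into `bins` discrete bins.

    Hypothetical helper for illustration only. Orientation ignores gradient
    sign (mod pi), as LINEMOD does; pixels without a gradient get -1.
    """
    gy, gx = np.gradient(img.astype(float))  # d/drow, d/dcol
    ang = np.mod(np.arctan2(gy, gx), np.pi)
    mag = np.hypot(gx, gy)
    q = (ang / np.pi * bins).astype(int) % bins
    q[mag < 1e-6] = -1
    return q

def match_template(scene_q, template):
    """Score a template (list of (dy, dx, bin)) at every scene position.

    Returns the best fraction of matching features and its position.
    Brute force for clarity; the real method uses linearized memory and
    SIMD-friendly lookup tables to make this fast.
    """
    H, W = scene_q.shape
    th = max(dy for dy, _, _ in template) + 1
    tw = max(dx for _, dx, _ in template) + 1
    best, best_pos = -1.0, None
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            hits = sum(scene_q[y + dy, x + dx] == b for dy, dx, b in template)
            score = hits / len(template)
            if score > best:
                best, best_pos = score, (y, x)
    return best, best_pos
```

Because matching compares small integers rather than raw gradients, templates for many objects and viewpoints can be scanned over a scene quickly, which is what makes the approach viable in heavy clutter.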
Within the machine learning library, Stefan worked on implementing a flexible and efficient decision tree learning framework. This was applied to obtain keypoint detectors with improved repeatability and efficiency. Additionally, tools were created to evaluate existing detectors and to create the data for learning.
Having a state-of-the-art object detection method in PCL enables efficient detection of texture-less objects with Kinect data. The new learning framework also makes it possible to create keypoint detectors with improved detection characteristics, which are useful for localization and object detection.
IEEE International Conference on Computer
Willow Garage would like to congratulate the team behind the Open Motion Planning Library (OMPL), which was recently awarded the grand prize in the Open Source Software World Challenge. The OSSWC is an annual competition hosted by the Korean Ministry of Knowledge Economy with the goal of promoting open source software and expanding exchanges among open source software developers all over the world. Fifty-five teams from twenty-three countries participated in this year's competition, with OMPL coming out on top.
OMPL is developed and maintained by the Physical and Biological Computing Group at Rice University, led by Dr. Lydia Kavraki. The development is coordinated by Dr. Mark Moll, Dr. Lydia Kavraki (Rice) and Dr. Ioan Șucan.
Willow Garage is proud of our role in supporting OMPL, particularly through the contributions of Dr. Ioan Șucan, Dr. Sachin Chitta and Dr. Gil Jones. The award-winning code is based on an initial version of OMPL written by Ioan Șucan while he was an intern at Willow Garage. Sachin, Gil and Ioan presented aspects of OMPL at ROSCon 2012.
During his summer internship at Willow Garage, Julian "Mac" Mason, a Ph.D. candidate from Duke University, worked on the Megaworldmodel: a framework for large-scale, long-term semantic maps. In contrast to occupancy maps (which model free and occupied space), semantic maps model the location of objects, and (when possible) their identities. Determining these identities is difficult: object recognition remains an open problem. For this reason, the Megaworldmodel provides a generic interface to object recognition systems, allowing existing tools to be easily integrated. Two such tools have already been included: Willow Garage's textured_object_detection, and Hilton Bristow's implementation of Deva Ramanan's deformable-parts model.
The Megaworldmodel cleanly encapsulates the capture, processing, and mapping of recognizable objects, and the querying of the resulting map. However, not all objects are recognizable! State-of-the-art object recognition algorithms require extensive supervised training to accurately recognize objects. In large, general environments, manual training is intractable. There are simply too many objects. To enable large-scale semantic mapping, the Megaworldmodel includes tools for active object search (using a Kinect-equipped PR2) and for unsupervised object discovery. While autonomously exploring an environment, the robot will encounter objects (which it cannot yet recognize) from many different viewpoints. Using unsupervised segmentation, these objects can be detected, and then clustered into training examples for existing object recognition techniques. Although this does not provide semantic labels (you get "object 6," not "coffee cup"), it does allow object instances to be recognized in other locations, and at other times. Ongoing work seeks to scale this technique to extremely large datasets, permitting the entirely unsupervised creation of a large object database.
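The discovery step described above can be approximated in miniature: after large supporting surfaces are removed, points that are mutually close are grouped into candidate objects, which then become training examples. Below is a deliberately naive Euclidean clustering sketch in Python, written in the spirit of (but not taken from) PCL's EuclideanClusterExtraction; the function name and parameters are illustrative:

```python
import numpy as np
from collections import deque

def euclidean_clusters(points, radius=0.1, min_size=5):
    """Group 3D points (N,3) into clusters by chaining neighbors within
    `radius`; clusters smaller than `min_size` are discarded as noise.

    Brute-force neighbor search for clarity; a real implementation would
    use a kd-tree.
    """
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            d = np.linalg.norm(points - points[i], axis=1)
            # Pull every unvisited point within radius into this cluster.
            nbrs = [j for j in list(unvisited) if d[j] <= radius]
            for j in nbrs:
                unvisited.discard(j)
                queue.append(j)
                cluster.append(j)
        if len(cluster) >= min_size:
            clusters.append(sorted(cluster))
    return clusters
```

Each resulting cluster is an unlabeled object candidate ("object 6", not "coffee cup"); matching candidates across viewpoints and visits is what turns them into training examples.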
More information about the Megaworldmodel is available here.
The ICRA 2013 Mobile Manipulation Challenge
We would like to invite participation in the ICRA 2013 Mobile Manipulation Challenge, built around the theme of "Robot's Kitchen". Teams may use their own robots or the PR2 robots that will be provided by the organizers. This is the second year the challenge is being organized, following last year's "Yesterday's Sushi" challenge at ICRA 2012 in St. Paul.
Deadline for intent to participate: January 15, 2013
Challenge website: http://
Details about the challenge problem, the prizes on offer, registration and qualification, and the preparatory workshop to be held at Willow Garage in early March are available on the challenge website.
Looking forward to seeing you at ICRA 2013 in Karlsruhe!
Hello ROS Community,
ROS Groovy Galapagos
Mass migration of code to GitHub
New Build System - catkin
Removal of Stacks
New Package Release System - bloom
New GUI Tools - rqt
pluginlib and class_loader
Automatic Documentation Jobs
Moving From rosbuild To catkin
Change from Wx to Qt
laser_drivers REP 117 Deprecation Completed
Change Lists of Note
Plans and Special Interest Groups
ROS Enhancement Proposals (REP)
During his internship at Willow Garage, Rob Linsalata from Tufts University worked on running ROS on small, low-power ARM-based processors. Rob focused on driving the TurtleBot around using the TurtleCore -- a small embedded computer produced by Gumstix.
After first integrating the TurtleCore with the TurtleBot, Rob focused on getting ROS running on the TurtleCore's ARM processor. He has provided documentation and procedures for bringing up the TurtleCore on the Overo ARM processor, and then helped test and debug ROS functionality running on the ARM architecture.
The TurtleBot comes by default with an Asus netbook. The netbook accounts for a significant fraction of the computation, cost, and power consumption when the TurtleBot is operating. By using a smaller ARM-based processor, TurtleBots can be made to run longer using less expensive parts.
For more details, see ros.org/wiki/TurtleCore
During his internship at Willow Garage, Jonathan Mace from Brown University worked on building an industrial-strength successor to rosbridge, a popular ROS package for connecting to ROS from a Web browser. His internship concluded with the release of the rosbridge suite, a robust and extensible collection of packages that facilitate Web-based and non-ROS connections to ROS.
Web browsers are a compelling choice for writing front ends to robot applications. In particular, they offer a ubiquitous, interoperable platform for robot interaction. Given that end users of robot applications may require little or no knowledge of the underlying robot middleware, decoupling a Web-based front end from a ROS-dependent back end is a promising direction to pursue.
To facilitate this decoupling, the rosbridge suite provides an access point through which Web browsers (and other WebSocket-compatible systems) can reach ROS. It also provides components that automate the installation and runtime linking of Web components. As such, the rosbridge suite makes it much easier for ROS developers to add a Web component to their work.
The rosbridge suite primarily contains a Web server which runs inside the ROS environment. This Web server listens for incoming WebSocket connections, and exchanges JSON-based messages with connected clients. Clients can instruct rosbridge to call ROS services, subscribe and publish to ROS topics, or introspect the ROS runtime. Response messages originating in the ROS runtime are propagated back to the client. Thus, Web browsers and middleware separate from ROS can still fully interact with a running ROS system.
The structure of the JSON messages exchanged between clients and the rosbridge server is defined by the rosbridge protocol. To make the protocol more extensible and pluggable, it was redefined and formally specified. It is similar in spirit to the protocol used by the original rosbridge, but offers more customization.
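Concretely, a client drives rosbridge by sending small JSON objects over the WebSocket. The sketch below constructs a few such messages in Python; the op names ("advertise", "publish", "subscribe", "call_service") follow the published rosbridge v2 protocol, while the topics, message types, and values are invented examples:

```python
import json

# Declare intent to publish on a topic, then publish a message to it.
advertise = {"op": "advertise", "topic": "/turtle1/cmd_vel",
             "type": "geometry_msgs/Twist"}
publish = {"op": "publish", "topic": "/turtle1/cmd_vel",
           "msg": {"linear": {"x": 0.5, "y": 0.0, "z": 0.0},
                   "angular": {"x": 0.0, "y": 0.0, "z": 0.2}}}

# Ask rosbridge to forward traffic from a topic, and call a service.
subscribe = {"op": "subscribe", "topic": "/turtle1/pose",
             "type": "turtlesim/Pose"}
call_service = {"op": "call_service", "service": "/rosapi/topics"}

# Each message is serialized to JSON and sent as the text payload of a
# WebSocket frame; rosbridge replies in the same format, e.g.
# {"op": "publish", ...} for incoming topic traffic or
# {"op": "service_response", ...} for service results.
frames = [json.dumps(m) for m in (advertise, publish, subscribe, call_service)]
```

Because the wire format is plain JSON over WebSockets, any language with a WebSocket client can interact with a running ROS system this way, with no ROS installation on the client side.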
Other packages in the rosbridge suite include: roswww, an HTTP Web server that runs in the ROS runtime; rosapi, a node that advertises services that introspect the ROS runtime; and tf smart throttle, a node that intelligently throttles tf messages for low-bandwidth connections.
Crossposted from ROS.org
ROS turned five years old in November, so it's time for our sort-of-annual State of ROS. If you recall, we took a deep dive into the growth of ROS in our third-year anniversary post. We won't be as prolific this time around, but suffice it to say that the past two years have built on the excitement, growth, and adoption of ROS.
Numbers don't tell the entire story, but it's a good place to start.
· There are 175 organizations or individuals who have publicly released ROS software in our indexed repositories, up from 50 in 2009 (through October)
· Not counting the approximately 40 PR2s all over the world, there are many hundreds of robots running ROS. We are aware of more than 90 types of robots running ROS, up from 50, and 28 robots now have supported installation instructions.
· We had 3699 public ROS packages as of April, compared to 1600 three years ago
· ROS continues to have a strong impact in the worldwide academic community, with 439 citations on Google Scholar for the paper: ROS: an open-source Robot Operating System
· There are now people working on ROS on every continent. Africa, South America, and Antarctica are new to the community this time around. Yes, Antarctica.
· You can now buy a book on ROS.
· One, and counting. This is the number of industry conferences dedicated to ROS. More than 200 individuals attended the ROSCon 2012 debut last year in St. Paul, MN. ROSCon 2013 heads to Stuttgart, Germany next year.
· People often ask how many ROS users there are. Due to the open source nature of ROS, we simply don't know. What we can tell you is that the ros.org wiki has had over 55,000 unique visitors in the last month. This doesn't include traffic to our many worldwide mirrors.
The latest version of ROS, Groovy Galapagos, is currently in Beta 1 release. Groovy will be the sixth full release of ROS, and it lays the foundation for continuing to grow the number of platforms ROS supports.
Inspired by The Mozilla Foundation, The Apache Software Foundation, and The GNOME Foundation, our three-year anniversary blog post discussed the possibility of a ROS Foundation. In May of this year, Willow Garage announced the debut of the Open Source Robotics Foundation, Inc. OSRF is an independent non-profit organization founded by members of the global robotics community whose mission is to support the development, distribution, and adoption of open source software for use in robotics research, education, and product development.
Because of the BSD license for ROS, we often have no idea who is using ROS in their commercial deployments. We suspect there are a few we are missing, but two major new products announced this year were built using ROS. First is Baxter from Rethink Robotics. Baxter was announced just a few months ago, and the company has set their sights on manufacturing industries. Check out IEEE Spectrum's article on Rethink here. Also built on ROS is Toyota's Human Support Robot (HSR), which is designed to help those with limited mobility within the home. ROS has even made inroads within the industrial robot world of late, specifically through the ROS-Industrial Consortium.
We can't discuss commercial deployments of ROS without mentioning TurtleBot, originally released in April 2011. Recognizing that not everyone can afford, or even needs, a $280,000 PR2 robot, TurtleBot was brought to market for the express purpose of letting as many people as possible get their hands on ROS. TurtleBot 2.0 was recently featured on Engadget and is now available for pre-order at www.turtlebot.com
At Willow Garage, we often refer to ourselves as a software company disguised as a robot company, and we can point to the ongoing growth of ROS as proof of that assertion. We have also been saying for some time that we need a LAMP stack for robotics. With the latest developments in commercial robots built on ROS, it feels like we are in the beginning stages of that process. We can't predict what ROS will look like in five years, or twenty-five, but if we continue to see the adoption, innovation, and excitement from the ROS community that we have seen in the first five years, then things are certainly looking Rosey.