Willow Garage Blog
Crossposted from ROS.org
ROS turned five years old in November, so it's time for our sort-of-annual State of ROS. If you recall, we took a deep dive into the growth of ROS in our third-year anniversary post. We won't be as prolific this time around, but suffice it to say that the past two years have built on the excitement, growth, and adoption of ROS.
Numbers don't tell the entire story, but they're a good place to start.
· There are 175 organizations or individuals who have publicly released ROS software in our indexed repositories, up from 50 in 2009 (through October)
· Not counting the approximately 40 PR2s all over the world, there are many hundreds of robots running ROS. We are aware of more than 90 types of robots running ROS, up from 50, and 28 of them have supported installation instructions.
· We had 3699 public ROS packages as of April, compared to 1600 three years ago
· ROS continues to have a strong impact in the worldwide academic community, with 439 citations on Google Scholar for the paper: ROS: an open-source Robot Operating System
· There are now people working on ROS on every continent. Africa, South America, and Antarctica are new to the community this time around. Yes, Antarctica.
· You can now buy a book on ROS.
· One, and counting. This is the number of industry conferences dedicated to ROS. More than 200 individuals attended the ROSCon 2012 debut in St. Paul, MN. ROSCon 2013 heads to Stuttgart, Germany.
· People often ask how many ROS users there are. Due to the open source nature of ROS, we simply don't know. What we can tell you is that the ros.org wiki has had over 55,000 unique visitors in the last month. This doesn't include traffic to our many worldwide mirrors.
The latest version of ROS, Groovy Galapagos, is currently in Beta 1 Release. Groovy will be the sixth full release of ROS. This release lays the foundation for ROS to continue growing the number of platforms it supports.
Inspired by The Mozilla Foundation, The Apache Software Foundation, and The GNOME Foundation, our three-year anniversary blog post discussed the possibility of a ROS Foundation. In May of this year, Willow Garage announced the debut of the Open Source Robotics Foundation, Inc. OSRF is an independent non-profit organization founded by members of the global robotics community whose mission is to support the development, distribution, and adoption of open source software for use in robotics research, education, and product development.
Because of the BSD license for ROS, we often have no idea who is using ROS in their commercial deployments. We suspect there are a few we are missing, but two major new products were announced this year that are built using ROS. First is Baxter from Rethink Robotics. Baxter was announced just a few months ago and the company has set their sights on manufacturing industries. Check out IEEE Spectrum's article on Rethink here. Also built on ROS is Toyota's Human Support Robot (HSR), which is designed to help those with limited mobility within the home. ROS has even made inroads within the industrial robot world of late, specifically through the ROS-Industrial Consortium.
We can't discuss commercial deployments of ROS without mentioning TurtleBot, originally released in April 2011. Recognizing that not everyone can afford, or even needs, a $280,000 PR2 robot, TurtleBot was brought to market for the express purpose of letting as many people as possible get their hands on ROS. TurtleBot 2.0 was recently featured on Engadget and is now available for pre-order at www.turtlebot.com
At Willow Garage, we often refer to ourselves as a software company disguised as a robot company, and we can point to the ongoing growth of ROS as proof of that assertion. We have also been stating for some time that we need a LAMP stack for robotics. With the latest developments in commercial robots built on ROS, it feels like we are in the beginning stages of that process. We can't predict what ROS will look like in five years, or twenty-five, but if we continue to see the adoption, innovation, and excitement from the ROS community that we have seen in the first five years, then things are certainly looking Rosey.
During his internship at Willow Garage, University of Southern California student Jon Binney worked on improving the way robots grasp. He taught them how to improve their knowledge of where and what an object is by feeling for it with their grippers.
Robot grasping is often only loosely coupled with object recognition. Typically, an object recognition algorithm is run to find the "most likely" object identity (shape) and pose, and the robot then executes a grasp motion relative to that pose. This approach is fragile: the pose provided by object recognition often has some amount of error (further exacerbated by imperfect calibration between the robot's cameras and grippers), which can cause the grasp to fail, and object recognition sometimes misidentifies the object entirely.
To overcome this, Jon implemented a system that uses the pose provided by object recognition as the center of a distribution over possible object poses, and that also considers other hypotheses for the object's shape. As the robot attempts to grasp the object, it uses measurements from tactile sensors on the gripper to update the distribution over object poses and shapes; tactile feedback tells the robot both where the object is and where it is not. If at any point the grasp being executed seems unlikely to succeed (based on the updated pose distribution), the robot backs off and picks a new grasp to try.
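The core idea can be sketched as a particle filter over object pose. The toy below is our illustration, not Jon's actual code: it tracks a one-dimensional object position, reweights particles after a binary tactile reading at a known finger location, and checks whether enough probability mass remains under the gripper to keep going. All names and numbers are invented for the example.

```python
import random

# Hypothetical sketch: a particle filter over a 1-D object position,
# updated by a binary tactile reading ("contact" / "no contact").
OBJECT_HALF_WIDTH = 0.02  # assumed object half-width in meters

def tactile_likelihood(particle_x, finger_x, felt_contact):
    """P(measurement | object at particle_x), with a small noise floor."""
    would_touch = abs(finger_x - particle_x) <= OBJECT_HALF_WIDTH
    if felt_contact:
        return 0.9 if would_touch else 0.1
    return 0.1 if would_touch else 0.9

def update(particles, weights, finger_x, felt_contact):
    """Reweight and normalize the pose distribution after one touch."""
    new_w = [w * tactile_likelihood(x, finger_x, felt_contact)
             for x, w in zip(particles, weights)]
    total = sum(new_w)
    return [w / total for w in new_w]

# Prior: object somewhere near the vision estimate at x = 0.0.
random.seed(0)
particles = [random.gauss(0.0, 0.03) for _ in range(500)]
weights = [1.0 / len(particles)] * len(particles)

# The finger probes at x = 0.01 and feels contact; probability mass
# shifts toward particles within one half-width of the finger.
weights = update(particles, weights, 0.01, felt_contact=True)

# A grasp would be abandoned if too little mass lies under the gripper.
mass_under_gripper = sum(w for x, w in zip(particles, weights)
                         if abs(x - 0.01) < OBJECT_HALF_WIDTH)
print(mass_under_gripper)
```

In the same spirit, a "no contact" reading carves mass *away* from the finger's location, which is the "where the object is not" information mentioned above.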
Code created for the project can be downloaded here.
Each year, the ACM/IEEE Human-Robot Interaction conference program committee gathers to decide which papers will be published and presented at the annual conference. This year, we were happy to host the program committee meeting at Willow Garage. In a 1.5 day gathering, we reviewed the papers, reviewers' comments, rebuttals, and discussions, to figure out how to construct this year's conference program and how to provide helpful feedback to the authors.
The program co-chairs, Jodi Forlizzi (Carnegie Mellon University) and Michita Imai (Keio University) pulled together a great group of people for the program committee and kept us on target. General co-chairs Hideaki Kuzuoka (University of Tsukuba) and Vanessa Evers (University of Twente) joined us for the program committee meeting, too. Hideaki joined in person and Vanessa Beamed in to Willow Garage on Friday to visit with everyone, which was fun.
We are very excited about the line-up of papers that will be presented at HRI 2013 in Tokyo, Japan. Hope to see you there!
Mobile manipulators like the PR2 have the physical capability to do a range of useful tasks for humans; however, their actual capabilities are limited by the software applications written by highly specialized programmers. Instead, Maya Cakmak from Georgia Tech envisions robots that can be programmed by their end-users for their own specific needs. This past summer, Maya worked on developing a spoken dialog interface that allows a user to program new skills by physically moving PR2’s two arms and using simple speech commands.
Imagine purchasing a brand new “programmable” robot. How would you know what to do to make the robot do something? This is not a problem for many of our daily appliances, as they considerably limit possible user actions. For functionality like robot programming with a verbal dialog interface, however, it is important to guide the user with appropriate feedback from the robot and to provide supplementary materials such as user manuals, tutorials, or instructional videos. Maya conducted a user study that replicates the described scenario. Participants in this study (15 men and 15 women, ages 19-70) with no prior knowledge of how to program the robot were left alone with the robot and a combination of supplementary materials. They had to figure out on their own how to program different skills, such as picking up medicine from a cabinet or folding a towel.
The study revealed that information presented in the user manual easily gets overlooked and instructional videos are most useful in jump starting the interaction. In addition, trial-and-error plays a crucial role especially for achieving a certain proficiency level.
User studies like Maya’s provide important insights into how the interface and the supplementary material should be designed to improve the learnability of end-user programmable robots. Check out the video for sample interactions and look for Maya and Leila Takayama’s upcoming publication for more details.
During his summer internship, Jeff Hawke from Georgia Tech worked with us on modeling, controlling and characterizing a novel robotic gripper.
The new gripper, named Velo 2G, is underactuated, using a single active flexor tendon to perform both fingertip and enveloping grasps. To integrate the Velo with the PR2 robot, the first task was to create a Universal Robot Description File (or URDF) containing an analytical model of its actuation mechanism. The second step was to write a controller able to command the gripper to a desired finger gap, while limiting the applied grip force to a specified value. Jeff also studied the relationship between applied motor torque and resulting grip force, and wrote a more advanced controller able to sustain large grip forces for prolonged periods of time while applying low current to the actuator.
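The force-limited control Jeff worked on can be sketched in a few lines. The following is an illustrative toy, not the Velo 2G controller: the proportional gain, the torque-to-force ratio, and all names are invented. It commands the gripper toward a desired finger gap but clamps the motor torque so the resulting grip force cannot exceed a specified ceiling.

```python
# Illustrative sketch of a force-limited gripper position controller.
# The constants below are made-up numbers, not Velo 2G parameters.
TORQUE_PER_NEWTON = 0.005  # assumed Nm of motor torque per N of grip force

def gripper_effort(gap_desired, gap_measured, max_grip_force, kp=40.0):
    """Proportional control on finger gap with a grip-force ceiling."""
    torque = kp * (gap_measured - gap_desired)  # close when gap too wide
    torque_limit = max_grip_force * TORQUE_PER_NEWTON
    return max(-torque_limit, min(torque, torque_limit))

# Far from the goal, the command saturates at the force-derived limit...
print(gripper_effort(0.00, 0.10, max_grip_force=20.0))
# ...and near the goal it falls back to plain proportional control.
print(gripper_effort(0.049, 0.050, max_grip_force=20.0))
```

A real single-tendon underactuated mechanism has a configuration-dependent torque-to-force mapping rather than a single constant, which is exactly why Jeff's characterization of motor torque versus grip force was needed.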
We are now testing the Velo 2G on the PR2 robot, performing grasping and manipulation tasks on a wide range of objects, both autonomously and under tele-operation. For more details, see the Velo 2G webpage at www.willowgarage.com/velo2g
During his summer internship at Willow Garage, Hilton Bristow, a PhD student from the Queensland University of Technology, Australia, implemented a deformable parts-based object recognition method. There are many perception situations when only monocular (single camera) visual data is available, and in such situations, robust, efficient object detection techniques are desired.
Object recognition using mixtures of deformable parts is a state-of-the-art technique for monocular object recognition. Hilton ported an existing method by Deva Ramanan from Matlab to C++ to improve the computational performance and make it more accessible to the computer vision and robotics communities alike. In doing so, he recognized that depth data (such as from the Kinect sensor) could be leveraged to prune the object search space and disambiguate multiple superimposed object candidates. The result was an object detection framework capable of detecting human bodies at 1-2 frames per second (fps) and simpler objects at 5-10 fps.
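The depth-pruning idea can be illustrated with simple pinhole-camera geometry: a detection window whose pixel size is inconsistent with the depth measured at its center cannot be the object, so it can be discarded before any expensive part scoring. This is our sketch of the concept, not Hilton's code; the camera and object parameters are illustrative.

```python
# Hedged sketch of depth-based candidate pruning for object detection.
FOCAL_PX = 525.0        # assumed focal length in pixels
OBJECT_HEIGHT_M = 1.7   # assumed physical height of the target class

def plausible(box_height_px, depth_m, tolerance=0.3):
    """Keep a candidate only if its apparent size matches its depth."""
    expected_px = FOCAL_PX * OBJECT_HEIGHT_M / depth_m  # pinhole projection
    return abs(box_height_px - expected_px) / expected_px <= tolerance

# Candidates as (box height in px, depth at box center in m). A 450 px
# box at 6 m would imply an implausibly large object, so it is pruned.
candidates = [(450, 2.0), (450, 6.0), (150, 6.0)]
kept = [c for c in candidates if plausible(*c)]
print(kept)
```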
Give the code a shot! You can find it in the wg-perception repository on GitHub, along with a number of pre-trained models and bindings to ROS and the ECTO synchronous vision pipeline. For more information check out the video.
During his second internship at Willow Garage, Aaron Blasdel from the University of Tokyo worked on a graphical user interface (GUI) framework for ROS utilizing the Qt framework. The result is ROS GUI, a system that allows users to interact with and introspect the ROS environment in a visual manner. Users are provided with tools and are encouraged to develop and contribute tools of their own to the ROS GUI ecosystem.
ROS GUI is designed as a plugin architecture that allows users to quickly implement Qt-based GUI plugins for use with ROS. The framework automatically saves and restores the set of currently loaded plugins and the position and size of their windows. Each plugin can also contribute additional state information of its own, which then persists across sessions.
Based on the ROS GUI framework, Aaron developed four tools to improve the debugging lives of ROS users. The first two plugins, rqt_console and rqt_logger_level, are closely coupled: they provide a graphical interface for capturing broadcast log messages and filtering them. Furthermore, the plugins can suppress log messages before they are sent, keeping bandwidth low.
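The kind of filtering rqt_console performs is easy to picture in plain Python. The snippet below mimics the idea with ordinary dictionaries; it is not the rqt_console implementation, and the message fields are our own simplification of a ROS log record.

```python
# Toy version of console-style log filtering by severity and node.
LEVELS = {"DEBUG": 0, "INFO": 1, "WARN": 2, "ERROR": 3, "FATAL": 4}

def filter_logs(messages, min_level="WARN", node=None):
    """Keep messages at or above a severity, optionally from one node."""
    return [m for m in messages
            if LEVELS[m["level"]] >= LEVELS[min_level]
            and (node is None or m["node"] == node)]

logs = [
    {"level": "INFO",  "node": "/camera",  "text": "frame received"},
    {"level": "WARN",  "node": "/camera",  "text": "frame dropped"},
    {"level": "ERROR", "node": "/planner", "text": "goal unreachable"},
]
print(filter_logs(logs))                          # WARN and above
print(filter_logs(logs, "DEBUG", "/planner"))     # everything from one node
```

What rqt_logger_level adds on top of this is the suppression side: raising a node's minimum severity so low-priority messages are never published at all, rather than filtered after the fact.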
The third tool, rqt_bag, enables the user to introspect the content of ROS bag files, either with a text-based visualization or with a message-specific view, for example viewing images. Additionally, it provides recording and playback functionality and can pass messages to other tools for visualization (e.g. rqt_plot for 2D plotting of numeric values). A basic API is provided for integrating custom visualizers.
Finally, the fourth tool rqt_web integrates Web-based tools into ROS GUI. It enables using these different technologies in an integrated user interface.
The four plugins developed in this project enable users to easily introspect and debug their ROS applications in a graphical manner.
For information on how to contribute a plugin or install these tools please go to: http://ros.org/wiki/rqt
Stephen Brawner is a PhD student at Brown University. During his recent internship at Willow Garage, Stephen developed a SolidWorks to URDF exporter. This exporter will help robot developers integrate their designs with ROS.
This tool is a simple add-in that exports single parts or whole assemblies to a URDF package. The add-in displays a simple GUI that automatically pulls all the information from a model and organizes the SolidWorks assembly tree into a URDF robot tree.
The add-in has separate GUIs for exporting single parts into standalone links and for exporting assemblies into a complete tree of links and joints.
For exporting parts, the tool presents a single window summarizing all the information it pulled from SolidWorks. For assemblies, the user is presented with a Property Manager page to configure the URDF, which can be saved with the design. After configuration, the tool analyzes the free degrees of freedom between each pair of connected components in the SolidWorks assembly to infer the joint type, origin, and axis. The user can also customize many attributes of the URDF to their liking within the add-in.
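The joint-inference step maps the degrees of freedom left free by the mates to one of the standard URDF joint types. The rule table below is our simplified sketch of that mapping, not the exporter's actual logic:

```python
# Hedged sketch: infer a URDF joint type from the free DOFs remaining
# between two mated components. Real mates need more care than this.
def infer_joint_type(free_rotations, free_translations):
    """Guess a URDF joint type from the remaining free DOFs."""
    if free_rotations == 0 and free_translations == 0:
        return "fixed"
    if free_rotations == 1 and free_translations == 0:
        return "revolute"   # "continuous" if the rotation is unlimited
    if free_rotations == 0 and free_translations == 1:
        return "prismatic"
    return "floating"       # anything more complex needs manual setup

# A hinge mate leaves one free rotation, so it becomes a revolute joint.
print(infer_joint_type(1, 0))
```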
This SolidWorks to URDF export tool should help ROS robot developers integrate their designs with ROS much faster and more easily.
Instructions for installation and use can be found at ros.org/wiki/sw_urdf_exporter
Gary Bradski, Founder and CTO at Industrial Perception, Inc. and Founder and Director of the Open Source Computer Vision Library (OpenCV), along with Vincent Rabaud, Research Engineer at Willow Garage, are leading a workshop entitled "Open Source Computer Vision and Robotics."
The workshop will take place on Monday, October 15, 2012 at the Sapienza University of Rome from 9:15 a.m. to 1:30 p.m.
Details and complimentary registration are available at http://visionrobotics.eventbrite.it/
As part of our research and development work at Willow Garage, we’ve recently been exploring new gripper designs and grasping strategies. We’d like to share some early results involving a new gripper that we’ve recently developed and tested. While parallel grasps are effective on a wide range of objects and tasks, adding the ability to envelop objects can greatly increase the stability of the grasp in many situations. We explored the design space aiming to achieve both of these capabilities.