Willow Garage Blog

February 7, 2012

ROSCon 2012
19-20 May 2012 (immediately following ICRA)
St. Paul, Minnesota, USA


Please join us this May for the inaugural edition of ROSCon!

ROSCon 2012 is a chance for ROS developers of all levels, beginner to expert, to spend an extraordinary weekend learning from and networking with the ROS community. Get tips and tricks from experts and share ideas with fellow developers from around the globe.

ROSCon is a developers' conference, modeled after PyCon and BoostCon. The two-day program will comprise tech talks and tutorials that will introduce you to new tools and libraries, and teach you more about the ones you already know.

We received an overwhelming number of session proposals, which made for some tough decisions in the review process and an acceptance rate of 27%. The program includes in-depth coverage of fundamentals, like tf and URDF, and introductions to higher-level concepts, like motion planning and multi-robot systems. We'll also hear about interesting applications of ROS, from teaching to field robotics. And we have two excellent keynotes, from Morgan Quigley of Stanford (an original author of ROS) and Julia M. Badger of NASA JSC.

We have some great sponsors to thank: Bosch, Motoman, Clearpath, Heartland, Willow Garage, CoroWare, Schunk, and Yujin. We're excited to have such strong industry support!

Registration is now open at roscon.ros.org.

If you have any questions, send email to info@roscon.ros.org.

January 19, 2012

PR2 and ball

Photo credit: Jessica McConnell Burt/The George Washington University

First there was P.O.O.P. S.C.O.O.P. (Perception Of Offensive Products and Sensorized Control Of Object Pickup) from the folks at Penn's GRASP Lab, and now this. The team at George Washington University has been working on even more tools to improve the clearly vital robo-canine relationship.

We last wrote about PR2 owner GWU Prof. Evan Drumwright shortly after his robot was delivered to the School of Engineering and Applied Science.

According to a recent article in George Washington Today, Prof. Drumwright and his team have also been sniffing around robot applications for canines. We encourage you to read New Robot, Old Tricks.

In a course entitled "Autonomous Robotics," Prof. Drumwright worked with students to develop occupational capabilities for the PR2, many of which were pet-related.

GWU senior Sam Zapolsky worked on teaching the PR2 how to walk a dog. Another student, James Taylor, worked on a program that would enable the PR2 to play fetch with a dog. Ph.D. student Roxana Leontie created a program that would allow the PR2 to deliver food to a pet.

This in-depth article is well worth reading, not just for the novelty of robots and pets, but also for a deeper understanding of the multidisciplinary nature of personal robotics research and the process by which researchers today are building the robot capabilities of tomorrow.

Congratulations to Prof. Drumwright and his students. You can read the full article here.

January 18, 2012

Tracking 3D objects in continuous point cloud data sequences is an important research topic for mobile robots: it allows robots to monitor the environment, make decisions, and adapt their motions to changes in the world. A typical application is visual servoing, where the key challenge is estimating the three-dimensional pose of an object in real time.

During his internship at Willow Garage, Ryohei Ueda from the JSK laboratory at the University of Tokyo worked on a novel 3D tracking library for the Point Cloud Library (PCL) project. The purpose of the library is to provide a comprehensive algorithmic base for estimating 3D object poses using Monte Carlo sampling techniques, with likelihoods calculated as weighted combinations of metrics over hyper-dimensional data, including Cartesian coordinates, colors, and surface normals. The libpcl_tracking library is optimized to perform computations in real time by employing multi-core CPU optimization, adaptive particle filtering (KLD sampling), and other modern techniques.
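
As a rough illustration, here is a minimal sketch of how such a tracker is configured, assuming the API as it later shipped in PCL's pcl::tracking module (names like KLDAdaptiveParticleFilterOMPTracker come from that module, not from this post):

    #include <boost/shared_ptr.hpp>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/tracking/kld_adaptive_particle_filter_omp.h>
    #include <pcl/tracking/approx_nearest_pair_point_cloud_coherence.h>
    #include <pcl/tracking/distance_coherence.h>
    #include <pcl/tracking/hsv_color_coherence.h>

    typedef pcl::PointXYZRGBA PointT;
    typedef pcl::tracking::ParticleXYZRPY ParticleT;
    typedef pcl::tracking::KLDAdaptiveParticleFilterOMPTracker<PointT, ParticleT> Tracker;

    // Configure a KLD-adaptive particle filter for a known object model.
    boost::shared_ptr<Tracker> makeTracker(
        const pcl::PointCloud<PointT>::ConstPtr& object_model)
    {
      boost::shared_ptr<Tracker> tracker(new Tracker(8));  // 8 OpenMP threads
      tracker->setMaximumParticleNum(500);  // KLD sampling caps the particle count
      tracker->setDelta(0.99);              // KLD confidence quantile
      tracker->setEpsilon(0.2);             // KLD error bound

      ParticleT bin_size;                   // histogram bins used by KLD sampling
      bin_size.x = bin_size.y = bin_size.z = 0.1f;
      bin_size.roll = bin_size.pitch = bin_size.yaw = 0.1f;
      tracker->setBinSize(bin_size);

      // Likelihood: weighted combination of geometric and HSV color coherence.
      pcl::tracking::ApproxNearestPairPointCloudCoherence<PointT>::Ptr coherence(
          new pcl::tracking::ApproxNearestPairPointCloudCoherence<PointT>);
      coherence->addPointCoherence(pcl::tracking::DistanceCoherence<PointT>::Ptr(
          new pcl::tracking::DistanceCoherence<PointT>));
      coherence->addPointCoherence(pcl::tracking::HSVColorCoherence<PointT>::Ptr(
          new pcl::tracking::HSVColorCoherence<PointT>));
      tracker->setCloudCoherence(coherence);

      tracker->setReferenceCloud(object_model);
      return tracker;
    }

    // Per frame: feed the new scene cloud and read back the 6D pose estimate.
    ParticleT trackFrame(Tracker& tracker,
                         const pcl::PointCloud<PointT>::ConstPtr& scene)
    {
      tracker.setInputCloud(scene);
      tracker.compute();
      return tracker.getResult();  // x, y, z, roll, pitch, yaw
    }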

To find out more about Ryohei's work, please watch the video above. You can also read the slides below (download pdf) for more technical details.

January 16, 2012

Would you like to see a robot give a high five just by asking it to? Brian Thomas, from the Brown University Robotics Group, came up with a way to command robots to give a high five and perform many other actions using spoken or written language. During his internship at Willow Garage, Brian developed a system called RoboFrameNet that lets robots respond to spoken or written commands with pre-programmed actions.

In RoboFrameNet, speech or text input is translated into robot actions through an intermediary representation called semantic frames. Semantic frames, developed for the FrameNet Project, describe a scene (or series of actions), such as navigating in a hallway, picking up an object, or following a person. Because RoboFrameNet is extensible, programmers can easily add new actions and processing capabilities to the robots.
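
To make the idea concrete, here is a hypothetical sketch of a semantic frame as a data structure; it illustrates the concept only, and none of these names come from RoboFrameNet's actual API:

    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    // A frame names a scene and the roles ("frame elements") that parsed
    // language must fill before the associated robot action can run.
    typedef std::map<std::string, std::string> FrameElements;

    struct SemanticFrame {
      std::string name;                                   // e.g. "bringing"
      std::vector<std::string> roles;                     // e.g. theme, goal
      std::function<void(const FrameElements&)> execute;  // robot behavior
    };

    int main() {
      // A verb lexicon maps parsed verbs to candidate frames.
      std::map<std::string, SemanticFrame> lexicon;
      lexicon["bring"] = {"bringing", {"theme", "goal"},
          [](const FrameElements& fe) {
            std::cout << "fetch " << fe.at("theme")
                      << " and deliver it to " << fe.at("goal") << "\n";
          }};

      // "Bring the food to the dog" -> verb "bring", theme "food", goal "dog".
      FrameElements fe;
      fe["theme"] = "food";
      fe["goal"] = "dog";
      lexicon["bring"].execute(fe);
    }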

RoboFrameNet is integrated with Willow Garage's Android application manager, and the RFNServer interface lets developers integrate their own work with RoboFrameNet.

For more details, see the video, and check out the software in the RoboFrameNet package on ros.org.

January 16, 2012

During his internship at Willow Garage, Sebastian Klose, a Ph.D. student from the Technical University of Munich, focused on integrating visual SLAM with measurements from an inertial measurement unit (IMU). Using a handheld unit with a Microsoft Kinect and an IMU mounted on it, Sebastian wanted to capture 3D maps of a room and the objects in it even when visual tracking had noticeable gaps. Visual SLAM usually creates good 3D maps, but it doesn't work as well when the camera is pointing at a blank wall or moving too fast.

To bridge those gaps in visual features, an IMU's accelerometers and gyroscopes can temporarily track the six-degree-of-freedom (6D) motion of the camera until new features are visible in the camera image. (Check out the sensors supported by ROS.)

Sebastian used an Extended Kalman Filter (EKF) to track the camera's 6D pose. The EKF estimates the biases on the IMU measurements based on inputs from the visual SLAM poses. Even when the camera loses track of visual features, the filter's pose estimate indicates where the camera has moved. Check out the video for details.
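
For intuition, here is a heavily simplified sketch of this kind of filter, assuming a linear nine-state model of position, velocity, and accelerometer bias (with a linear model the EKF reduces to a plain Kalman filter). The real filter also tracks orientation and gyroscope bias; these names are illustrative, not taken from the imu_filter stack:

    #include <Eigen/Dense>

    // Simplified 9-state filter: [position, velocity, accelerometer bias].
    struct ImuEkf {
      Eigen::Matrix<double, 9, 1> x = Eigen::Matrix<double, 9, 1>::Zero();
      Eigen::Matrix<double, 9, 9> P = Eigen::Matrix<double, 9, 9>::Identity();

      // Propagate with a bias-corrected accelerometer sample (assumed already
      // rotated into the world frame with gravity removed).
      void predict(const Eigen::Vector3d& accel, double dt,
                   const Eigen::Matrix<double, 9, 9>& Q) {
        Eigen::Matrix<double, 9, 9> F = Eigen::Matrix<double, 9, 9>::Identity();
        F.block<3, 3>(0, 3) = Eigen::Matrix3d::Identity() * dt;   // p += v*dt
        F.block<3, 3>(3, 6) = -Eigen::Matrix3d::Identity() * dt;  // v -= b*dt
        x.segment<3>(0) += x.segment<3>(3) * dt;                  // position
        x.segment<3>(3) += (accel - x.segment<3>(6)) * dt;        // velocity
        P = F * P * F.transpose() + Q;                            // bias: constant
      }

      // Correct with an absolute position from visual SLAM. These absolute
      // fixes are what make the slowly drifting IMU biases observable.
      void update(const Eigen::Vector3d& p_slam, const Eigen::Matrix3d& R) {
        Eigen::Matrix<double, 3, 9> H = Eigen::Matrix<double, 3, 9>::Zero();
        H.block<3, 3>(0, 0) = Eigen::Matrix3d::Identity();
        const Eigen::Matrix3d S = H * P * H.transpose() + R;
        const Eigen::Matrix<double, 9, 3> K = P * H.transpose() * S.inverse();
        x += K * (p_slam - x.segment<3>(0));
        P = (Eigen::Matrix<double, 9, 9>::Identity() - K * H) * P;
      }
    };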

The imu_filter stack code is available on ROS.org. This code works with visual SLAM algorithms and can be used with any other input that provides 6D poses.

December 19, 2011

Urban Robotics Inc., a leading provider of three-dimensional (3D) imaging sensors, software, and algorithms, today announced it is making its highly scalable, spatially searchable, and colorized 3D octree-based point cloud format available to the Point Cloud Library (PCL) community.

PCL is a standalone, large-scale, open source project for 3D point cloud processing. Willow Garage launched PCL in March 2011 to help accelerate 3D algorithmic work related to robotic applications. It is free for research and commercial use. The addition of Urban Robotics' software code to PCL lays the foundation for a standardized format for large-scale 3D applications.

Urban Robotics developed its octree-based format to efficiently store and manage point cloud data, and to address challenges related to the rapid processing of massive 3D images during daily operations.

"The main challenge with supported LAS and XYZ point cloud file formats is that they do not scale efficiently. Octree data structures provide an elegant way to offer level of detail support that efficiently scales to extremely large datasets," said Geoff Peters, CEO, Urban Robotics.

Octree formats also allow 3D point cloud data to be spatially indexed and queried, and they can encode image color and other metadata.
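
PCL's existing in-memory octree already illustrates the indexing idea; a minimal sketch of a spatial box query is below (the Urban Robotics contribution extends the same structure to out-of-core, disk-backed storage for massive datasets):

    #include <vector>
    #include <Eigen/Core>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/octree/octree_search.h>

    int main()
    {
      pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud(
          new pcl::PointCloud<pcl::PointXYZRGB>);
      pcl::PointXYZRGB pt;                   // normally loaded from disk
      pt.x = 0.5f; pt.y = 0.5f; pt.z = 0.5f;
      cloud->push_back(pt);

      const float resolution = 0.05f;        // octree leaf size in meters
      pcl::octree::OctreePointCloudSearch<pcl::PointXYZRGB> octree(resolution);
      octree.setInputCloud(cloud);
      octree.addPointsFromInputCloud();

      // Spatial query: indices of all points inside an axis-aligned box.
      std::vector<int> in_box;
      octree.boxSearch(Eigen::Vector3f(0.f, 0.f, 0.f),
                       Eigen::Vector3f(1.f, 1.f, 1.f), in_box);
    }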

"Individual companies are challenged by massive point clouds and, as a result, end up developing proprietary data structures to support 3D visualization and manipulation of these large data sets", said Radu B. Rusu, research scientist at Willow Garage and PCL founder. "This fragmented approach to point cloud data is holding back the industry from developing truly revolutionary products."

Urban Robotics will work with Willow Garage to port its octree-based format to the PCL framework in early 2012.

"We hope our contribution to PCL will help establish open standards not just for LIDAR data, but also for the emerging dense and massive 3D datasets extracted from standard imagery," said Peters.

The Point Cloud Library is supported by a large number of engineers, scientists, and organizations around the world. "Urban Robotics' octree-based format is a critical component that will allow innovative companies to focus on 3D product development versus core component development," said Rusu.

For more information regarding the status of the project, please visit the Point Cloud Library (PCL) web site at http://www.pointclouds.org.

December 12, 2011

xkcd fans rejoice: sudo make me a sandwich has become a reality:

As a bonus, PR2 fixes you popcorn while you wait.

This is the latest in food-related achievements from TUM's Intelligent Autonomous Systems Group, which is part of the PR2 Beta Program. Over the past couple of years, they've programmed their robots to learn to make food, including pancakes and a Bavarian breakfast. Rather than programming these tasks directly, their research enables robots to learn how to perform them from instructions on the Internet and other resources.

For more information, please see the IEEE Spectrum/Automaton article.


November 24, 2011

Point Cloud Library (PCL) has been recognized with the Grand Prize of the 2011 Open Source Software (OSS) World Challenge (www.ossaward.org). This is the fifth year of the World Challenge, whose purpose is to promote the development of open source software and to expand interaction among open source communities around the world. An international committee evaluated a total of 56 open source projects from 22 countries for this year's World Challenge; their job must have been difficult given the overall high quality of the projects. The winners of the other four prizes were:

  • the Shark Machine Learning Library project (a collaboration between the Institute for Neural Computation at Ruhr University Bochum, nisys GmbH, the University of Copenhagen, and Honda Research Institute Europe)
  • DIADEM (Oxford University, UK)
  • USM Extract (University of Science, Malaysia)
  • Meego Photo Sharing (Shanghai Jiao Tong University)

The awards were announced at an inspirational event hosted by the Ministry of Knowledge Economy in Seoul, South Korea. High-ranking officials from the Ministry and other parts of the government participated, gave talks, and handed out the prizes. Deborah Bryant gave the keynote talk on "Open Governments" and the use of open source software for the benefit of the general public. Willow Garage's Radu B. Rusu gave a talk about PCL, the Grand Prize-winning project.

November 17, 2011

This past summer at Willow Garage, Julian "Mac" Mason, a Ph.D. student from Duke University, worked on semantic world modeling using the PR2 with an attached Microsoft Kinect. Without any pre-existing object models in a database, the PR2 navigated through hallways and other indoor areas, identifying and mapping objects located on flat surfaces, such as tables and countertops.

The PR2 identified common objects like cups, books, printers, robot parts, and houseplants. This method requires neither a database of preexisting object models nor the close-range, high-resolution data traditionally used in object recognition and tabletop segmentation. The PR2 stores the Kinect RGB-D point clouds (and tf frames), then processes the data, segmenting out horizontal planes and the objects resting on them (see the sketch after the list below). This enables some interesting applications:

  1. Rescanning an area: an existing database provides object locations to the navigation system, and the robot explicitly drives to, and directly observes, the location of each potential object, so individual object appearances and disappearances can be tracked over time.
  2. Querying the databases generated by two or more runs to see whether a particular object has moved.
  3. Relying on the perceptual data associated with each object to answer queries like "show me small, curved, white objects in the cafeteria" (a good substitute for "coffee cups").
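
As referenced above, here is a generic PCL sketch of the horizontal-plane step (not the actual code from the semanticmodel package): RANSAC fits a plane roughly perpendicular to gravity, and the points that are not plane inliers become candidate objects. It assumes the cloud has already been transformed so that +z points up.

    #include <pcl/ModelCoefficients.h>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/filters/extract_indices.h>
    #include <pcl/segmentation/sac_segmentation.h>

    void segmentSupportingPlane(
        const pcl::PointCloud<pcl::PointXYZRGB>::ConstPtr& cloud,
        pcl::PointCloud<pcl::PointXYZRGB>& objects)
    {
      // RANSAC plane fit, restricted to planes perpendicular to the up axis.
      pcl::SACSegmentation<pcl::PointXYZRGB> seg;
      seg.setModelType(pcl::SACMODEL_PERPENDICULAR_PLANE);
      seg.setAxis(Eigen::Vector3f(0.f, 0.f, 1.f));  // horizontal planes only
      seg.setEpsAngle(0.1);                         // ~6 degrees of tolerance
      seg.setMethodType(pcl::SAC_RANSAC);
      seg.setDistanceThreshold(0.02);               // 2 cm inlier band
      seg.setInputCloud(cloud);

      pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
      pcl::ModelCoefficients::Ptr coeffs(new pcl::ModelCoefficients);
      seg.segment(*inliers, *coeffs);

      // Everything that is not part of the plane is a candidate object.
      pcl::ExtractIndices<pcl::PointXYZRGB> extract;
      extract.setInputCloud(cloud);
      extract.setIndices(inliers);
      extract.setNegative(true);
      extract.filter(objects);
    }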

The related software is available in the semanticmodel package on ROS.org. While this software was built and tested on a PR2, it requires only a localized mobile base and a Microsoft Kinect. For more technical information, please see these presentation slides from IROS 2011.

November 10, 2011

Tony Pratkanis, a Bay Area high school student, has spent the past several summers at Willow Garage developing various capabilities for robots. This past summer, he worked on deploying an Android applications platform for ROS-enabled robots. For example, Tony developed a GUI that runs on Android and lets users launch robot demos and utilities from a tablet rather than from the robot's command-line interface. In the past, users had to set up a computer, log into the robot (such as the PR2), and enter commands at a command-line interface to manage ROS applications.

With the applications platform, a user selects the robot in the GUI and then the application to run. Users can also install or manage other applications. This provides a standard way of launching demos and utilities on the robot side as well as on the client interface. In some cases, the GUI supports different types of robots, such as the PR2 and the TurtleBot.
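
Assuming the robot side is built on the ROS app_manager package's service interface of the era, a client might start an app with something like the sketch below; the service name "/robot/start_app" and the app identifier are placeholders, and the exact namespace depends on how the robot's app manager is configured:

    #include <ros/ros.h>
    #include <app_manager/StartApp.h>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "app_launcher");
      ros::NodeHandle nh;

      // Service client for the app manager's start_app service
      // (the "/robot" namespace is a placeholder).
      ros::ServiceClient client =
          nh.serviceClient<app_manager::StartApp>("/robot/start_app");

      app_manager::StartApp srv;
      srv.request.name = "pr2_props/high_five";  // hypothetical app identifier

      if (client.call(srv) && srv.response.started)
        ROS_INFO("App started.");
      else
        ROS_ERROR("Failed to start app.");
      return 0;
    }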

View the video to see how Tony managed the PR2 from a tablet and used other features of the applications platform.

We hope that the applications platform makes it easier to share demos and utilities with ROS robots. For more info, visit the ROS wiki page.