Willow Garage Blog

January 17, 2014

Clearpath Robotics welcomes the PR2 robot and community to its growing family

(Menlo Park, CA and Kitchener, ON, Canada – January 15, 2014) Willow Garage, the developer of PR2, announces the immediate transfer of support and services responsibilities to Clearpath Robotics, a leader in mobile robotics for research and development. Willow Garage’s development of PR2 along with the Robot Operating System (ROS) has produced the world’s leading mobile manipulation platform for research and development. Willow Garage will continue to sell its remaining stock of PR2 systems while Clearpath Robotics now becomes the sole provider of hardware and software support to current and future PR2 customers. Interest in PR2 systems should continue to be directed to Willow Garage through its portal at www.willowgarage.com, while members of the PR2 community should direct correspondence to www.clearpathrobotics.com.

“Willow Garage is committed to continue to support customers of its PR2 personal robotics platform,” said Scott Hassan, Founder and Chairman, Willow Garage. “I am delighted that Clearpath Robotics will be fulfilling that commitment at least through 2016.”

“The PR2, along with ROS, changed the pace of robotics research and created history,” said Matt Rendall, CEO at Clearpath Robotics. “We’ve been a champion of ROS since the start, so we understand and value the PR2 community and their work. We’re ecstatic to take on service responsibilities for this piece of history, and advance development within the community.”

The PR2 is a compliant mobile manipulation platform built by Willow Garage. Released for production in 2010, the robot’s safe, modular design spurred groundbreaking research in the fields of autonomy, mobile manipulation, and human-robot interaction. The standardized platform enables researchers to share their work and leverage the open source software community (ROS); today over 1000 software libraries exist for the 40 PR2s in use in over a dozen countries.

Clearpath Robotics has been a longstanding partner of Willow Garage: it was an early adopter of ROS, the first manufacturing partner for Turtlebot, and a founding sponsor of the annual ROS developers conference, ROSCon. In addition, Clearpath’s CTO, Ryan Gariepy, is a founding board member of the Open Source Robotics Foundation (OSRF).

In order to provide continued customer excellence for PR2 support, Clearpath Robotics is currently hiring Open Source Software Engineers.

About Willow Garage

Willow Garage developed hardware (including PR2) and open source software (including ROS) for personal robotics applications. It was founded on the vision that personal robots are the next paradigm-shifting personal productivity tool. Willow Garage was organized to encourage spin-outs and created eight companies from 2011-13. Among these companies are Suitable Technologies, the leading provider of products that let people be where they need to be regardless of where they are in the world, as well as Redwood Robotics and Industrial Perception Inc., which were recently purchased by Google Inc.

About Clearpath Robotics

Clearpath Robotics, a global leader in unmanned vehicle robotics for research and development, is dedicated to automating the world’s dullest, dirtiest, and deadliest jobs. The Company serves robotics leaders in over 30 countries worldwide in academic, corporate, industrial, and military environments. Recognizing the value of future innovation, Clearpath Robotics established Partnerbot, a grant program to support university robotics research teams, internationally. Clearpath Robotics provides robust robotic vehicles and autonomous solutions that are engineered for performance, designed for customization, and built for open source. Visit Clearpath Robotics at www.clearpathrobotics.com, follow us on Twitter @clearpathrobots or like us on Facebook.

Contacts

WILLOW GARAGE:

Robert Bauer, Ph.D.

Executive Director, Commercialization
650.ILV.RBOT [458.7268]
comm@willowgarage.com
www.willowgarage.com

Clearpath Robotics:

Meghan Hennessey
Marketing Communications
519-513-2416
press@clearpathrobotics.com
www.clearpathrobotics.com

August 21, 2013

Palo Alto, Calif., August 21, 2013 - Suitable Technologies, Inc., has retained a majority of employees from Willow Garage, Inc. to increase and enhance the development of Suitable Technologies' Beam™ remote presence system. Suitable Technologies will use the combined resources to further product development, sales, and customer support.

Beam enables users to travel instantly to remote locations over a WiFi or cellular 4G LTE connection, using video conferencing technology that users can drive. Beam is the market's most effective and reliable solution for remote presence, providing uncompromising quality with a robust offering of features.

Scott Hassan, founder of both Willow Garage and Suitable Technologies, said, "I am excited to bring together the teams of Willow Garage and Suitable Technologies to provide the most advanced remote presence technology to people around the world."

Willow Garage will continue to support customers of its PR2 personal robotics platform and sell its remaining stock of PR2 systems. Interest in PR2 systems or support should continue to be directed to Willow Garage through its portal at www.willowgarage.com.

By increasing resources in research and development, production and customer support, Suitable Technologies is positioned to successfully serve demands for Beam remote presence technology. To learn more about Beam, please visit www.suitabletech.com

About Suitable Technologies

Suitable Technologies develops world-class remote presence technologies. Its first product, Beam, allows people to travel instantly and is designed and manufactured at its headquarters in Palo Alto.

August 20, 2013

The MoveIt! team at Willow Garage has been busy adding a new feature: pick and place with the PR2 robot. A new manipulation tab in the MoveIt! Rviz Plugin allows users to interact directly with the manipulation capabilities in MoveIt! and with the object recognition capabilities provided by the Object Recognition Kitchen (ORK). The plugin also allows users to select objects and tables in the scene. Users can plan and execute a pick plan for an object with a single click. MoveIt! will plan grasps using the household objects database. To place an object, just select the table you want to place it on in the Rviz plugin. MoveIt! will automatically sample a set of poses on the table at the right distance from the edges, determine the right target pose to put the object down, and plan and execute the place action.
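The pose-sampling step can be sketched in a few lines. This is a minimal illustration, not the MoveIt! API: it samples a grid of candidate (x, y) place positions on a rectangular tabletop while keeping a margin from the edges, and every name and parameter below is hypothetical.

```python
import itertools

def sample_place_poses(table_w, table_d, margin, step):
    """Sample candidate (x, y) place positions on a rectangular table,
    keeping every candidate at least `margin` from the edges.
    Illustrative only -- not the MoveIt! API."""
    xs, ys = [], []
    x = margin
    while x <= table_w - margin + 1e-9:
        xs.append(round(x, 6))
        x += step
    y = margin
    while y <= table_d - margin + 1e-9:
        ys.append(round(y, 6))
        y += step
    return list(itertools.product(xs, ys))

# A 0.6 m x 0.4 m table, 10 cm edge margin, 10 cm grid spacing.
poses = sample_place_poses(table_w=0.6, table_d=0.4, margin=0.1, step=0.1)
```

In a real planner each surviving candidate would then be checked for reachability and collisions before one is chosen as the place target.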

For more information please visit moveit.ros.org

August 9, 2013

During his internship at Willow Garage, Pablo Speciale, a master's student from Vibot, worked on allowing robots to better perceive their environment by improving calibration across multiple RGB cameras. Proper calibration allows robots to accurately interact with their environment by combining measurements from different cameras in an optimization process.

In this method, calibration is obtained by minimizing the reprojection error with a non-linear solver (in our case, Ceres Solver). The approach starts from the assumption that one camera is already calibrated with respect to the robot, and takes an initial estimate of the remaining camera poses from the robot model. The solver then estimates the best relative position between cameras from measurements of 2D patterns, in our case a checkerboard held by the PR2. Proper calibration is achieved when all observed points move as a rigid entity as the robot and its joints move.
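To make the objective concrete, here is a minimal sketch of reprojection error for a pinhole camera. The actual work minimizes this quantity over the camera poses with Ceres; the function names, intrinsics, and synthetic points below are illustrative assumptions, not the project's code.

```python
import math

def project(point, fx, fy, cx, cy):
    """Pinhole projection of a 3D point (in the camera frame) to pixels."""
    X, Y, Z = point
    return (fx * X / Z + cx, fy * Y / Z + cy)

def reprojection_error(points_3d, observations, fx, fy, cx, cy):
    """RMS pixel distance between projected and observed points -- the
    quantity a solver such as Ceres minimizes over the unknowns."""
    sq = 0.0
    for p, (u_obs, v_obs) in zip(points_3d, observations):
        u, v = project(p, fx, fy, cx, cy)
        sq += (u - u_obs) ** 2 + (v - v_obs) ** 2
    return math.sqrt(sq / len(points_3d))

# Synthetic checkerboard corners 2 m in front of the camera.
pts = [(0.0, 0.0, 2.0), (0.1, 0.0, 2.0), (0.0, 0.1, 2.0)]
obs = [project(p, 500.0, 500.0, 320.0, 240.0) for p in pts]
err = reprojection_error(pts, obs, 500.0, 500.0, 320.0, 240.0)

# With the observations shifted by one pixel, the error is one pixel.
obs_off = [(u + 1.0, v) for u, v in obs]
err_off = reprojection_error(pts, obs_off, 500.0, 500.0, 320.0, 240.0)
```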

The goal of this project was the creation of a ROS calibration package for multiple cameras, such as RGB cameras, Microsoft Kinect, and Prosilicas (high-definition cameras). Thanks to Vincent Rabaud and David Fofi for assisting with this project.

Please visit the following links for more information on this project and related past work:

Vibot Master Thesis

Github repository (in development)

Original calibration work: www.ros.org/wiki/calibration

August 1, 2013

During his internship, Russell Toris from Worcester Polytechnic Institute worked on improving the ROS JavaScript libraries to make creating intuitive user interfaces easier for researchers. These improvements help lower the barrier to entry for both robotics researchers who want to design web interfaces and web designers who want to connect their work to robotics.

Using HTML5, JavaScript, and other web technologies has proven to be a great way to expose robotics to a diverse user base across the Internet. These technologies allow non-expert users to program robots through intuitive, browser-based user interfaces. As part of the Robot Web Tools effort, Russell worked on high-level libraries that make such interfaces easy to build. As a demonstration of the power of these tools, he used the latest software from the ROS and Robot Web Tools projects to develop a user interface that allows users to easily direct the robot to manipulate objects. Users can now drag-and-drop objects in their web browser to direct the robot to perform simple pick-and-place tasks.

By using such an interface, non-expert users can now easily instruct a robot to manipulate the world by simply specifying how they want the world to look. The robot begins by perceiving any known objects on the table, and displays them to the user as 3D meshes. The user is then able to drag-and-drop objects around in order to specify how they want the objects arranged. Users can also specify configurations of objects and save them as templates, as a form of programming by demonstration. Having such an interface available in web browsers means that non-expert users with a wide array of operating systems and browsers can perform manipulation tasks remotely, using their own computers or mobile devices.
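Browser interfaces like this talk to ROS through rosbridge, which exchanges JSON operations over a websocket. The sketch below composes a rosbridge-style "publish" operation; the topic name and message fields are hypothetical, chosen only to illustrate the shape of the protocol, not taken from this project.

```python
import json

def make_publish_op(topic, msg):
    """Build a rosbridge-protocol 'publish' operation as a JSON string."""
    return json.dumps({"op": "publish", "topic": topic, "msg": msg})

# Hypothetical goal: ask the robot to place the object "mug" at a new pose.
payload = make_publish_op(
    "/interactive_manipulation/place_goal",
    {"object_id": "mug", "target": {"x": 0.5, "y": -0.2, "z": 0.75}},
)
decoded = json.loads(payload)  # what rosbridge would parse on the robot side
```

In a browser the same JSON would be sent over a websocket by a library such as ros.js, while a Python client can use the standard library alone to build the message.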

This work was done in collaboration with Kaijen Hsiao from Willow Garage, Sarah Osentoski from Bosch, and Chad Jenkins from Brown University. For more information, see ros3djs, SharedAutonomyToolkit, and robotwebtools.org.

July 15, 2013

During his internship at Willow Garage, Mihai Pomarlan from the Politehnica University of Timisoara spent his time improving the process by which robots move in complex situations, also known as motion planning.

Finding the best motion plan from a variety of options is typically a time-consuming search. There is an opportunity to speed up planning by optimizing this search, and some planners attempt to do just that by keeping a roadmap for the robot. However, if the environment changes, some parts of the roadmap will become unusable.

Checking the entire roadmap against the current environment is an inefficient process. Instead, Mihai employed a heuristic approach which discovers and checks candidates for feasibility. If one aspect of the plan is found invalid, its neighbors have their cost increased and another candidate is selected from the roadmap. If a component of the plan is found to be valid, its neighbors have their cost decreased.
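The cost-update heuristic can be sketched as follows. This is an illustrative reconstruction from the description above, not the actual SLPRM code, and all names and values are made up.

```python
def update_costs(costs, neighbors, component, valid, delta=1.0, floor=0.1):
    """After checking roadmap `component` against the current environment,
    raise the cost of its neighbors if it was invalid, or lower it (down
    to `floor`) if it was valid. Sketch of the heuristic described above."""
    for n in neighbors[component]:
        if valid:
            costs[n] = max(floor, costs[n] - delta)
        else:
            costs[n] = costs[n] + delta
    return costs

# Tiny roadmap: component e1 borders e2 and e3.
neighbors = {"e1": ["e2", "e3"], "e2": ["e1"], "e3": ["e1"]}
costs = {"e1": 1.0, "e2": 1.0, "e3": 1.0}
update_costs(costs, neighbors, "e1", valid=False)  # e1 found in collision
```

After the update, the planner would prefer to check low-cost candidates first, so regions near known collisions are deferred rather than re-checked exhaustively.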

The newly developed planner, called sparse lazy PRM (SLPRM), has been tested against RRTConnect on manipulation problems. The planner is efficient, provides good-quality paths, and the package is freely available online.

Another instance where a precomputed set of possible behaviors is useful is when the planning problem involves narrow passages and complex spaces. Such is the case when planning for a manipulation task in which the robot needs to use both arms and change grasps on an object. A simple demo in MoveIt showcases this: the robot is tasked with moving a ring around a fixed plane. A roadmap planner, similar to SLPRM, plans the movements of the ring, and the robot then follows those movements with its arms by inverse kinematics, choosing from a finite set of grasps as appropriate.

Although this project is in an early state, it may reveal useful extensions for the OMPL and MoveIt libraries that allow easy definition and reliable solving of complex manipulation tasks.

For more information visit moveit.ros.org and ompl_slprm.

June 27, 2013

During his internship at Willow Garage and the Open Source Robotics Foundation, Paul Mathieu from the University of Tokyo has been improving ROS support for ARM platforms with a focused effort on the Raspberry Pi. His work makes installing ROS Groovy a simple task on the pint-sized platform.

Until recently, installing ROS on ARM platforms required building a large quantity of ROS software from source code, a long and tedious task. The lack of easy-to-use cross compilers meant that the software had to be built on the board itself, a time-consuming process due to the limited computational power of the Raspberry Pi. Paul's work focused on providing a repository of binary packages for such boards, as well as improvements and extensions to the current build farm's capabilities, allowing non-x86 binary packages to be easily generated.

The ROS packaging system has been reworked and a new API for the ROS distribution system has been drafted with deep extensibility in mind. These improvements make building and packaging ROS (and non-ROS) software for PC or embedded targets an easy task, and also facilitate the replication of build farms.

To install ROS Groovy on the Raspberry Pi, please see the installation page here.

June 13, 2013

During his internship at Willow Garage, Alex Ichim from EPFL in Switzerland concentrated his efforts on simplifying the process of using off-the-shelf RGB-D cameras to capture objects and rooms in 3D. In contrast to other proposed systems utilizing low-cost sensors, his goal was to leverage the geometric information gathered from the depth camera as much as possible, without needing RGB cameras to align elements together in space. The result is comparable to more complex state-of-the-art SLAM algorithms that use color features.

To help with the capture of 3D information, Alex and his team present a system that makes use of geometric features such as planar regions. Planes serve several purposes, from noise removal and alignment of pairs of frames to global error relaxation within the captured information. In addition, much of their effort went into enhancing the different stages of point cloud registration by implementing and benchmarking techniques such as filtering, normal computation, and correspondence estimation.
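As a minimal illustration of how planar regions can be used (for example, to discard noisy points), the sketch below fits a plane through three points and measures another point's distance to it. This is plain geometry, not the project's actual implementation or the PCL API.

```python
import math

def plane_from_points(p0, p1, p2):
    """Plane through three points, returned as (unit normal n, offset d)
    with n . x + d = 0."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = math.sqrt(sum(c * c for c in n))
    n = [c / norm for c in n]
    d = -sum(n[i] * p0[i] for i in range(3))
    return n, d

def point_plane_distance(p, n, d):
    """Signed distance of a point to the plane -- the quantity one would
    threshold to reject noisy points far from a detected planar region."""
    return sum(n[i] * p[i] for i in range(3)) + d

n, d = plane_from_points((0, 0, 0), (1, 0, 0), (0, 1, 0))  # the z = 0 plane
dist = point_plane_distance((0.5, 0.5, 0.2), n, d)  # point 0.2 above it
```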

Finally, his team refined how the collection of 3D data can be transformed into a compressed representation, such as colorized 3D models. Such a system opens up many possibilities given the simplicity of the setup, from scanning small objects such as toys and larger items such as cars all the way to reconstructing entire rooms. Once captured, the models can be converted into physical form using off-the-shelf 3D printers.

A thorough evaluation of possible RGB extensions of the application is left for future work. In the meantime, a complete analysis of the system's components, along with implementations, is available online at www.pointclouds.org.

For more information about Alex's work, check out his thesis (PDF) and presentation (PDF).

June 3, 2013

During his internship at Willow Garage, Scott Niekum from the University of Massachusetts Amherst developed a learning from demonstration system that allows users to show the PR2 how to perform complex, multi-step tasks, which the robot can then generalize to new situations.  The main test application was the autonomous assembly of simple IKEA furniture.

First, the user provides several kinesthetic demonstrations of the task in various situations, physically moving the arms of the robot to complete the task.  A series of algorithms is then used to discover repeated structure across the demonstrations, resulting in reusable skills that can be used to reproduce the task.

The robot is then able to sequence these skills in an intelligent, adaptive way by using classifiers learned from the demonstration data.  If the robot happens to make a mistake during execution of the task, the user can stop the robot at any time and provide an interactive correction, showing the robot how to fix the mistake.  This information is then integrated into the robot's knowledge base, so that it can deal with similar situations in the future.

For more information see:

Scott Niekum, Sachin Chitta, Andrew Barto, Bhaskara Marthi, and Sarah Osentoski. Incremental Semantically Grounded Learning from Demonstration. Robotics: Science and Systems 9, June 2013.

For more information please visit:

http://www.ros.org/wiki/ar_track_alvar

http://www.ros.org/wiki/dmp

http://www.ros.org/wiki/ml_classifiers

May 29, 2013

At Willow Garage, we believe in the power of the web to enable new applications in robotics. The web browser, taking advantage of emerging HTML5 web standards such as WebGL, websockets, and unified video streaming, can be a powerful and versatile frontend for accessing, operating, and gathering information from robots. Today we are announcing a set of new open source libraries for 3D visualization and interaction, promoting the development of new web-based frontends for ROS systems.

Together, these libraries enable a web-based 3D teleoperation interface for the PR2, as shown in the accompanying video. This interface integrates an interactive robot model with real-time point cloud streaming. It is built using clients for Interactive Markers, TF and point cloud streaming that are part of the ros3d.js JavaScript library. The connection between the web browser and ROS is established over websockets using rosbridge and ros.js.

On the robot side, dedicated ROS nodes throttle the transmission of TF information and provide precomputed transforms for the Interactive Markers client. The 3D meshes and textures needed for the robot model are provided by an HTTP file server. In addition, the depth and color information from the Kinect is jointly encoded into a compressed video stream that is provided via HTTP. In order to increase the dynamic range of the streamed depth image, it is split into two individual frames that encode the captured depth information from 0 to 3 meters and from 3 to 6 meters, respectively. Furthermore, compression artifacts are reduced by filling areas of unknown depth information with interpolated sample data. A binary mask is used to detect and omit these samples during decoding. Once this video stream is received by the web browser, it is assigned to a WebGL texture object which allows for fast rendering of the point cloud on the GPU. Here, a vertex shader is used to reassemble the depth and color data followed by generating a colored point cloud. In addition, a filter based on local depth variance is used to further reduce the impact of video compression distortion.  
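The two-frame depth encoding can be sketched numerically. Under the stated scheme of splitting depth into frames covering 0 to 3 meters and 3 to 6 meters, a minimal 8-bit encode/decode pair looks like this; the function names and quantization details are illustrative assumptions, and the real encoder of course operates on whole video frames, not single values.

```python
def encode_depth(depth_m, near=3.0, far=6.0, levels=255):
    """Split a depth value into two 8-bit samples covering 0-3 m and
    3-6 m, mirroring the two-frame scheme described above (a sketch)."""
    clamped = min(max(depth_m, 0.0), far)
    lo = round(min(clamped, near) / near * levels)                # 0-3 m frame
    hi = round(max(clamped - near, 0.0) / (far - near) * levels)  # 3-6 m frame
    return lo, hi

def decode_depth(lo, hi, near=3.0, far=6.0, levels=255):
    """Reassemble an approximate depth from the two 8-bit samples."""
    if hi > 0:  # the sample falls in the far (3-6 m) frame
        return near + hi / levels * (far - near)
    return lo / levels * near

lo, hi = encode_depth(1.5)       # a point 1.5 m from the camera
roundtrip = decode_depth(lo, hi)
```

Splitting the range this way doubles the effective depth resolution compared to squeezing 0 to 6 meters into a single 8-bit channel, at the cost of transmitting two frames.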

Using the ros3d.js JavaScript libraries for Interactive Markers, TF, and point cloud streaming, we have created a web-based teleoperation interface for the PR2 that can run in any modern web browser without the need to install additional software. The provided toolkit makes it easy for developers to port their existing ROS-based applications to the web.

For more information, read the tutorials at ros.org and visit robotwebtools.org.