Willow Garage Blog
Palo Alto, Calif., August 21, 2013 - Suitable Technologies, Inc., has retained a majority of employees from Willow Garage, Inc. to increase and enhance the development of Suitable Technologies' Beam™ remote presence system. Suitable Technologies will use the combined resources to further product development, sales, and customer support.
Beam enables users to travel instantly to remote locations through a drivable video-conferencing system that operates over a WiFi or cellular 4G LTE connection. Beam is the market's most effective and reliable solution for remote presence, providing uncompromising quality with a robust offering of features.
Scott Hassan, founder of both Willow Garage and Suitable Technologies, said, "I am excited to bring together the teams of Willow Garage and Suitable Technologies to provide the most advanced remote presence technology to people around the world."
Willow Garage will continue to support customers of its PR2 personal robotics platform and to sell its remaining stock of PR2 systems. Inquiries about PR2 systems or support should continue to be directed to Willow Garage through its portal at www.willowgarage.com.
By increasing resources in research and development, production, and customer support, Suitable Technologies is positioned to successfully serve demand for Beam remote presence technology. To learn more about Beam, please visit www.suitabletech.com.
About Suitable Technologies
Suitable Technologies develops world-class remote presence technologies. Its first product, Beam, allows people to travel instantly to remote locations; it is designed and manufactured at the company's headquarters in Palo Alto.
The MoveIt! team at Willow Garage has been busy adding a new feature: pick and place with the PR2 robot. A new manipulation tab in the MoveIt! Rviz plugin allows users to interact directly with the manipulation capabilities in MoveIt! and with the object recognition capabilities provided by the Object Recognition Kitchen (ORK). The plugin also allows users to select objects and tables in the scene. Users can plan and execute a pick for an object with a single click; MoveIt! plans grasps using the household objects database. To place an object, just select the table you want to place it on in the Rviz plugin. MoveIt! will automatically sample a set of poses on the table at the right distance from the edges, determine the right target pose to put the object down, and plan and execute the place action.
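For those who prefer scripting over the Rviz plugin, the same pick pipeline is also reachable through the moveit_commander Python API. The sketch below is illustrative only; the group name "right_arm" and the object ID "part" are hypothetical placeholders for a planning group and a collision object already present in your planning scene.

```python
#!/usr/bin/env python
# Minimal sketch: triggering a pick through the moveit_commander API.
# "right_arm" and "part" are hypothetical placeholders.
import sys
import rospy
import moveit_commander

rospy.init_node("pick_sketch")
moveit_commander.roscpp_initialize(sys.argv)

arm = moveit_commander.MoveGroupCommander("right_arm")

# MoveIt! generates grasps for the named object, then plans and executes
# the pick; place() works analogously with a target pose on the table.
arm.pick("part")
```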
For more information, please visit moveit.ros.org.
During his internship at Willow Garage, Pablo Speciale, a master's student from Vibot, worked on allowing robots to better perceive their environment by improving calibration across multiple RGB cameras. Proper calibration allows a robot to interact accurately with its environment by combining measurements from different cameras in an optimization process.
In this method, calibration is obtained by minimizing the reprojection error with a non-linear solver (in our case, Ceres Solver). The approach starts from the assumption that one camera is already calibrated with respect to the robot, and takes an initial estimate from the robot model. The solver then estimates the best relative positions between cameras from measurements of 2D patterns, in our case a checkerboard held by the PR2. Calibration is deemed proper when all observed points move as a rigid entity as the robot moves its joints.
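To make the idea concrete, here is a minimal sketch of minimizing checkerboard reprojection error for a single camera pose. The actual work uses Ceres Solver in C++; scipy's least_squares stands in here, and all variable names are stand-ins rather than the package's real API.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, board_pts, observed_px, K):
    """Reprojection residuals for one camera pose (3 rotation-vector
    components followed by 3 translation components)."""
    rvec, t = params[:3], params[3:]
    R = Rotation.from_rotvec(rvec).as_matrix()
    cam_pts = board_pts @ R.T + t      # board points in the camera frame
    proj = cam_pts @ K.T               # pinhole projection
    px = proj[:, :2] / proj[:, 2:3]    # perspective divide
    return (px - observed_px).ravel()

# board_pts: Nx3 checkerboard corners in a reference frame, observed_px:
# Nx2 detected corners, K: 3x3 intrinsics, x0: initial pose taken from
# the robot model.  A solver call would look like:
#   fit = least_squares(residuals, x0, args=(board_pts, observed_px, K))
```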
The goal of this project was the creation of a ROS calibration package for multiple cameras, such as RGB cameras, Microsoft Kinect, and Prosilicas (high-definition cameras). Thanks to Vincent Rabaud and David Fofi for assisting with this project.
Please visit the following links for more information on this project and related past work:
Github repository (in development)
Original calibration work: www.ros.org/wiki/calibration
By using such an interface, non-expert users can easily instruct a robot to manipulate the world simply by specifying how they want the world to look. The robot begins by perceiving any known objects on the table and displays them to the user as 3D meshes. The user can then drag and drop objects around to specify how he or she wants them arranged. Users can also save configurations of objects as templates, a form of programming by demonstration. Having such an interface available in web browsers means that non-expert users with a wide array of operating systems and browsers can perform manipulation tasks remotely, using their own computers or mobile devices.
This work was done in collaboration with Kaijen Hsiao from Willow Garage, Sarah Osentoski from Bosch, and Chad Jenkins from Brown University. For more information, see ros3djs, SharedAutonomyToolkit, and robotwebtools.org.
During his internship at Willow Garage, Mihai Pomarlan from the Politehnica University of Timisoara spent his time improving how robots plan their movements in complex situations, a problem known as motion planning.
Finding a good motion plan among a variety of options is typically a time-consuming search. Some planners speed this up by keeping a precomputed roadmap for the robot. However, if the environment changes, parts of the roadmap become unusable.
Checking the entire roadmap against the current environment is inefficient. Instead, Mihai employed a heuristic approach that discovers candidate paths and lazily checks them for feasibility. If part of a candidate is found to be invalid, its neighbors have their costs increased and another candidate is selected from the roadmap; if a part is found to be valid, its neighbors have their costs decreased.
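A toy sketch of this lazy checking loop is shown below. It is not the released planner's code, just the idea: repeatedly extract the cheapest candidate path from the roadmap, validate its edges against the current environment, and raise or lower neighboring edge costs depending on the outcome.

```python
import heapq

def shortest_path(roadmap, start, goal):
    """Dijkstra over the roadmap's current edge costs.
    roadmap: dict node -> {neighbor: cost}, with every node as a key."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return path[::-1]
        if d > dist.get(u, float("inf")):
            continue
        for v, cost in roadmap[u].items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return None  # goal unreachable in the current roadmap

def lazy_query(roadmap, start, goal, edge_valid, penalty=2.0, reward=0.9):
    """edge_valid(a, b) -> bool stands in for a real collision check."""
    while True:
        path = shortest_path(roadmap, start, goal)
        if path is None:
            return None                       # no candidates left
        ok = True
        for a, b in zip(path, path[1:]):
            if edge_valid(a, b):              # lazy check against the scene
                for n in roadmap[a]:
                    roadmap[a][n] *= reward   # valid: neighbors get cheaper
            else:
                del roadmap[a][b]             # drop the invalid edge
                roadmap[b].pop(a, None)
                for n in roadmap[a]:
                    roadmap[a][n] *= penalty  # invalid: neighbors get pricier
                ok = False
                break
        if ok:
            return path                       # every edge checked out
```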
The newly developed planner, called sparse lazy PRM, has been tested against RRTConnect on manipulation problems. It is efficient, provides good-quality paths, and the package is freely available online.
Another case where a precomputed set of possible behaviors is useful is when the planning problem involves narrow passages and complex spaces, as when planning a manipulation task in which the robot needs to use both arms and change grasps on an object. A simple demo in MoveIt! showcases this: the robot is tasked with moving a ring around a fixed plane. A roadmap planner similar to SLPRM plans the movements of the ring, and the robot then follows those movements with its arms via inverse kinematics, choosing from a finite set of grasps as appropriate.
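The grasp-switching part of such a demo can be pictured with the small hypothetical sketch below: given a sequence of ring poses and a finite grasp set, keep the current grasp while an IK solution exists and switch grasps when it does not. The ik_valid callable is a stand-in for a real IK query, not part of any released API.

```python
def plan_grasp_sequence(ring_poses, grasps, ik_valid):
    """Choose a grasp for each ring pose: keep the current grasp while an
    IK solution exists, otherwise switch to another grasp from the set.
    ik_valid(pose, grasp) -> bool stands in for a real IK query."""
    sequence, current = [], None
    for pose in ring_poses:
        if current is not None and ik_valid(pose, current):
            sequence.append(current)   # current grasp still reachable
            continue
        for grasp in grasps:           # regrasp: try the finite set
            if ik_valid(pose, grasp):
                current = grasp
                break
        else:
            return None                # no reachable grasp: replan the ring path
        sequence.append(current)
    return sequence
```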
Although this project is at an early stage, it may reveal useful extensions to the OMPL and MoveIt! libraries that allow easy definition and reliable solving of complex manipulation tasks.
During his internship at Willow Garage and the Open Source Robotics Foundation, Paul Mathieu from the University of Tokyo has been improving ROS support for ARM platforms, with a focused effort on the Raspberry Pi. His work makes installing ROS Groovy a simple task on the pint-sized platform.
Until recently, installing ROS on ARM platforms required building a large quantity of ROS software from source, a long and tedious task. The lack of easy-to-use cross-compilers meant that the software had to be built on the board itself, a time-consuming process given the limited computational power of the Raspberry Pi. Paul's work focused on providing a repository of binary packages for such boards, as well as on improvements and extensions to the current build farm, allowing non-x86 binary packages to be generated easily.
The ROS packaging system has been reworked, and a new API for the ROS distribution system has been drafted with deep extensibility in mind. These improvements make building and packaging ROS (and non-ROS) software for PC or embedded targets an easy task, and they also facilitate the replication of build farms.
To install ROS Groovy on a Raspberry Pi, please check out the instructions here.
During his internship at Willow Garage, Alex Ichim from EPFL, Switzerland, concentrated his efforts on simplifying the use of off-the-shelf RGB-D cameras to capture objects and rooms in 3D. In contrast to other proposed systems built on low-cost sensors, his goal was to leverage the geometric information from the depth camera as much as possible, without the need for RGB cameras to align elements in space. The results are comparable to more complex state-of-the-art SLAM algorithms that use color features.
To help with the capture of 3D information, Alex and his team present a system that makes use of geometric features such as planar regions. Planes serve several purposes, ranging from noise removal and alignment of pairs of frames to global error relaxation within the captured data. In addition, much of the effort went into enhancing the individual stages of point cloud registration by implementing and benchmarking techniques such as filtering, normal computation, and correspondence estimation.
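One ingredient of such a pipeline, fitting a dominant plane with RANSAC so that matched planes can later anchor frame-to-frame alignment, can be sketched in a few lines. This is an illustration of the technique, not the PCL implementation:

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.01, seed=0):
    """points: Nx3 array; returns (unit normal n, offset d) with n.p + d = 0
    for the plane supported by the most inliers within tol meters."""
    rng = np.random.default_rng(seed)
    best_count, best_model = 0, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                  # degenerate (near-collinear) sample
        n /= norm
        d = -n @ p0
        count = np.count_nonzero(np.abs(points @ n + d) < tol)
        if count > best_count:
            best_count, best_model = count, (n, d)
    return best_model
```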
Finally, the team refined how the collected 3D data can be transformed into a compact representation, such as a colorized 3D model. Given the simplicity of the setup, such a system opens up many possibilities, from scanning small objects such as toys and larger items such as cars all the way to reconstructing entire rooms. Once captured, the models can be converted into physical form using off-the-shelf 3D printers.
A thorough evaluation of possible RGB extensions of the application is left for future work. In the meantime, a complete analysis of the system's components, as well as the implementations, is available online at www.pointclouds.org.
During his internship at Willow Garage, Scott Niekum from the University of Massachusetts Amherst developed a learning from demonstration system that allows users to show the PR2 how to perform complex, multi-step tasks, which the robot can then generalize to new situations. Our main test application was the autonomous assembly of simple IKEA furniture.
First, the user provides several kinesthetic demonstrations of the task in various situations; in these demonstrations, the user physically moves the robot's arms to complete the task. A series of algorithms is then used to discover repeated structure across the demonstrations, resulting in reusable skills that can reproduce the task.
The robot is then able to sequence these skills in an intelligent, adaptive way by using classifiers learned from the demonstration data. If the robot happens to make a mistake during execution of the task, the user can stop the robot at any time and provide an interactive correction, showing the robot how to fix the mistake. This information is then integrated into the robot's knowledge base, so that it can deal with similar situations in the future.
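As a toy illustration of the sequencing step, one could train a classifier on (state features, next skill) pairs extracted from the demonstrations and query it at runtime. The sketch below is not the system's actual classifier; sklearn, the feature layout, and the skill names are all invented for illustration.

```python
# Toy sketch: a nearest-neighbor classifier maps a perceived state to the
# next skill to execute.  Features and labels here are invented examples.
from sklearn.neighbors import KNeighborsClassifier

# states: feature vectors observed during demos (e.g. object poses);
# next_skills: the skill the demonstrator executed from each state.
states = [[0.1, 0.4], [0.5, 0.2], [0.9, 0.7]]
next_skills = ["grasp_leg", "insert_leg", "screw_leg"]

clf = KNeighborsClassifier(n_neighbors=1).fit(states, next_skills)
print(clf.predict([[0.48, 0.22]]))  # -> ['insert_leg'], the closest demo state
```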
For more information, see:
Scott Niekum, Sachin Chitta, Andrew Barto, Bhaskara Marthi, Sarah Osentoski, Incremental Semantically Grounded Learning from Demonstration. Robotics: Science and Systems 9, June 2013.
At Willow Garage, we believe in the power of the web to enable new applications in robotics. The web browser, taking advantage of emerging HTML5 standards such as WebGL, websockets, and unified video streaming, can be a powerful and versatile frontend for accessing, operating, and gathering information from robots. Today we are announcing a set of new open source libraries for 3D visualization and interaction, promoting the development of new web-based frontends for ROS systems.
On the robot side, dedicated ROS nodes throttle the transmission of TF information and provide precomputed transforms for the Interactive Markers client. The 3D meshes and textures needed for the robot model are provided by an HTTP file server.
In addition, the depth and color information from the Kinect is jointly encoded into a compressed video stream that is provided via HTTP. To increase the dynamic range of the streamed depth image, it is split into two individual frames that encode the captured depth information from 0 to 3 meters and from 3 to 6 meters, respectively. Furthermore, compression artifacts are reduced by filling areas of unknown depth information with interpolated sample data; a binary mask is used to detect and omit these samples during decoding.
Once this video stream is received by the web browser, it is assigned to a WebGL texture object, which allows for fast rendering of the point cloud on the GPU. Here, a vertex shader reassembles the depth and color data and generates a colored point cloud. In addition, a filter based on local depth variance is used to further reduce the impact of video compression distortion.
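The depth-packing trick can be sketched in a few lines of numpy. The code below is an illustrative reconstruction of the idea described above, not the library's actual codec:

```python
import numpy as np

def pack_depth(depth_m):
    """Split metric depth into two 8-bit frames (0-3 m and 3-6 m bands),
    plus a binary validity mask for samples with unknown depth."""
    valid = np.isfinite(depth_m) & (depth_m > 0)
    d = np.where(valid, depth_m, 0.0)
    near = np.clip(d, 0.0, 3.0) / 3.0 * 255.0       # saturates beyond 3 m
    far = np.clip(d - 3.0, 0.0, 3.0) / 3.0 * 255.0  # covers 3-6 m
    return near.astype(np.uint8), far.astype(np.uint8), valid

def unpack_depth(near, far, valid):
    """Recombine the two bands; masked (unknown) samples decode to NaN."""
    d = (near.astype(np.float32) + far.astype(np.float32)) / 255.0 * 3.0
    return np.where(valid, d, np.nan)
```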
At ROSCon 2013, we integrated MoveIt! with the Baxter Research Robot from Rethink Robotics. Baxter has a sonar array that provides some 3D information about the environment, but this information was not integrated with MoveIt!; even so, the generated motions avoid self-collisions (including collisions between the two arms). We would like to thank Rethink Robotics for allowing us to integrate with their robot, and especially Joe Romano, Albert Huang, Christopher Gindel, and Matthew Williamson for helping with the actual integration.
For more information about MoveIt!, see moveit.ros.org.