Willow Garage Blog
During his internship at Willow Garage, Alex Ichim from EPFL in Switzerland concentrated his efforts on simplifying the process of using off-the-shelf RGB-D cameras to capture objects and rooms in 3D. In contrast to other proposed systems using low-cost sensors, his goal was to leverage the geometric information gathered from the depth camera as much as possible, without relying on RGB cameras to align elements together in space. The result is comparable to more complex state-of-the-art SLAM algorithms that use color features.
To help with the capture of 3D information, Alex and his team present a system that makes use of geometric features such as planar regions. Planes are used for purposes ranging from noise removal and alignment of pairs of frames to global error relaxation within the captured data. In addition, much of their effort went into enhancing the different stages of point cloud registration by implementing and benchmarking techniques such as filtering, normal computation, correspondence estimation, and correspondence rejection.
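To make the registration pipeline concrete, here is a minimal sketch of pairwise alignment with PCL, assuming a simple voxel-grid downsampling step followed by ICP. The leaf size and correspondence distance are placeholder values, and this illustrates the general approach rather than Alex's exact implementation.

```cpp
// Illustrative pairwise registration with PCL (parameters are placeholders).
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/registration/icp.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr
align(const pcl::PointCloud<pcl::PointXYZ>::Ptr& source,
      const pcl::PointCloud<pcl::PointXYZ>::Ptr& target)
{
  // Downsample both clouds to reduce noise and computation.
  pcl::VoxelGrid<pcl::PointXYZ> grid;
  grid.setLeafSize(0.01f, 0.01f, 0.01f);
  pcl::PointCloud<pcl::PointXYZ>::Ptr src(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr tgt(new pcl::PointCloud<pcl::PointXYZ>);
  grid.setInputCloud(source); grid.filter(*src);
  grid.setInputCloud(target); grid.filter(*tgt);

  // Estimate the rigid transform that aligns the source cloud to the target.
  pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
  icp.setMaxCorrespondenceDistance(0.05);   // reject distant correspondences
  icp.setInputSource(src);
  icp.setInputTarget(tgt);
  pcl::PointCloud<pcl::PointXYZ>::Ptr aligned(new pcl::PointCloud<pcl::PointXYZ>);
  icp.align(*aligned);
  return aligned;
}
```

A full system would chain such pairwise alignments across many frames and then apply global error relaxation, as described above.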
Finally, his team refined how the collected 3D data can be transformed into a compressed representation, such as colorized 3D models. Given the simplicity of the setup, such a system opens up many possibilities, from scanning small objects such as toys and larger items such as cars all the way to reconstructing entire rooms. Once captured, the models can be converted into physical form using off-the-shelf 3D printers.
A thorough evaluation of possible RGB extensions of the application is left for future work. In the meantime, a complete analysis of the components of the system, as well as the implementations, is available online at www.pointclouds.org.
During his internship at Willow Garage, Scott Niekum from the University of Massachusetts Amherst developed a learning from demonstration system that allows users to show the PR2 how to perform complex, multi-step tasks, which the robot can then generalize to new situations. The main test application was the autonomous assembly of simple IKEA furniture.
First, the user provides several kinesthetic demonstrations of the task in various situations, physically moving the arms of the robot to complete the task. A series of algorithms is then used to discover repeated structure across the demonstrations, resulting in reusable skills that can be used to reproduce the task.
The robot is then able to sequence these skills in an intelligent, adaptive way by using classifiers learned from the demonstration data. If the robot happens to make a mistake during execution of the task, the user can stop the robot at any time and provide an interactive correction, showing the robot how to fix the mistake. This information is then integrated into the robot's knowledge base, so that it can deal with similar situations in the future.
For more information, see:
Scott Niekum, Sachin Chitta, Andrew Barto, Bhaskara Marthi, Sarah Osentoski, Incremental Semantically Grounded Learning from Demonstration. Robotics: Science and Systems 9, June 2013.
At Willow Garage, we believe in the power of the web to enable new applications in robotics. The web browser, taking advantage of emerging HTML5 web standards such as WebGL, WebSockets, and unified video streaming, can be a powerful and versatile frontend for accessing, operating, and gathering information from robots. Today we are announcing a set of new open source libraries for 3D visualization and interaction, promoting the development of new web-based frontends for ROS systems.
On the robot side, dedicated ROS nodes throttle the transmission of TF information and provide precomputed transforms for the Interactive Markers client. The 3D meshes and textures needed for the robot model are served by an HTTP file server. In addition, the depth and color information from the Kinect is jointly encoded into a compressed video stream that is provided via HTTP. To increase the dynamic range of the streamed depth image, it is split into two individual frames that encode the captured depth information from 0 to 3 meters and from 3 to 6 meters, respectively. Furthermore, compression artifacts are reduced by filling areas of unknown depth with interpolated sample data; a binary mask is used to detect and omit these samples during decoding. Once the video stream is received by the web browser, it is assigned to a WebGL texture object, which allows the point cloud to be rendered quickly on the GPU: a vertex shader reassembles the depth and color data and generates a colored point cloud. Finally, a filter based on local depth variance further reduces the impact of video compression distortion.
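To make the two-range encoding concrete, the following is a small sketch (using OpenCV, with illustrative names and thresholds, not the actual streaming code) of how a 16-bit depth image in millimeters could be split into a near (0-3 m) and a far (3-6 m) 8-bit frame before video encoding. Pixels with unknown depth are left at zero and would be filled by interpolation as described above.

```cpp
// Hypothetical sketch: split a 16-bit depth image (CV_16UC1, millimeters) into
// two 8-bit frames covering 0-3 m and 3-6 m. Names and ranges are illustrative.
#include <opencv2/core.hpp>
#include <cstdint>

void splitDepth(const cv::Mat& depth_mm, cv::Mat& near_frame, cv::Mat& far_frame)
{
  near_frame = cv::Mat::zeros(depth_mm.size(), CV_8UC1);
  far_frame  = cv::Mat::zeros(depth_mm.size(), CV_8UC1);
  for (int y = 0; y < depth_mm.rows; ++y)
    for (int x = 0; x < depth_mm.cols; ++x)
    {
      uint16_t d = depth_mm.at<uint16_t>(y, x);
      if (d == 0)
        continue;  // unknown depth: filled later by interpolation before encoding
      if (d < 3000)
        near_frame.at<uint8_t>(y, x) = static_cast<uint8_t>(d * 255 / 3000);
      else if (d < 6000)
        far_frame.at<uint8_t>(y, x)  = static_cast<uint8_t>((d - 3000) * 255 / 3000);
    }
}
```

On the browser side, the vertex shader performs the inverse mapping to recover metric depth from the two frames.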
At ROSCon 2013, we integrated MoveIt! with the Baxter Research Robot from Rethink Robotics. Baxter has a sonar array that provides some 3D information about the environment, but this information was not integrated with MoveIt!; the motions generated by MoveIt! still avoid self-collisions (including collisions between the two arms). We would like to thank Rethink Robotics for allowing us to integrate with their robot, especially Joe Romano, Albert Huang, Christopher Gindel and Matthew Williamson for helping with the actual integration.
For more information about MoveIt!, see moveit.ros.org.
The Columbia University Robotics Lab team at this year's Cornell Cup used a Willow Garage Velo2G Gripper prototype for their Assistive Robotics project. Their goal was to build a low-cost robotic arm, controlled by facial muscles. Due to its intrinsic compliance and adaptability, the Velo2G was a great fit. More details, including interviews with the Columbia University team members, can be found on Engadget's website.
Willow Garage is proud to announce the initial release of MoveIt!: new software aimed at letting you build advanced applications integrating motion planning, kinematics, and collision checking with grasping, manipulation, navigation, perception, and control. MoveIt! is robot-agnostic software that can be quickly set up with your robot if a URDF representation of the robot is available. The MoveIt! Setup Assistant lets you configure MoveIt! for any robot, allowing you to visualize and interact with the robot model.
MoveIt! can incorporate both actual sensor data and simulated models to build an environment representation. 3D sensor information can be automatically integrated in real time into the representation of the world that MoveIt! maintains. CAD models can also be imported into the same world representation if desired. Collision-free motion planning, execution, and monitoring are core capabilities that MoveIt! provides for any robot. MoveIt! updates its representation of the environment on the fly, enabling reactive motion planning and execution, which is essential for applications in human-robot collaborative environments.
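As a rough sketch of what planning and execution look like from user code, the snippet below sets a pose target for a planning group and asks MoveIt! to plan and execute a collision-free motion. The class and header names follow later MoveIt! releases and the "right_arm" group is a placeholder, so the alpha API described here may differ.

```cpp
// Minimal sketch of planning to a pose goal with the MoveIt! C++ interface.
#include <ros/ros.h>
#include <moveit/move_group_interface/move_group_interface.h>
#include <geometry_msgs/Pose.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "moveit_pose_goal_sketch");
  ros::AsyncSpinner spinner(1);   // MoveGroupInterface needs a spinning node
  spinner.start();

  // "right_arm" is a placeholder planning group created with the Setup Assistant.
  moveit::planning_interface::MoveGroupInterface group("right_arm");

  geometry_msgs::Pose target;     // desired end-effector pose in the planning frame
  target.orientation.w = 1.0;
  target.position.x = 0.5;
  target.position.y = -0.2;
  target.position.z = 1.0;
  group.setPoseTarget(target);

  // Plan a collision-free trajectory and execute it if planning succeeded.
  moveit::planning_interface::MoveGroupInterface::Plan plan;
  if (group.plan(plan) == moveit::planning_interface::MoveItErrorCode::SUCCESS)
    group.execute(plan);

  return 0;
}
```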
MoveIt! interfaces with controllers through a standard ROS interface, allowing for easy interoperability, i.e. the ability to use the same higher-level software with a variety of robots without needing to change code. MoveIt! is designed to be flexible, using a plugin architecture that lets users integrate their own custom components while still providing out-of-the-box functionality through default implementations. Furthermore, the ROS communication and configuration layer of MoveIt! is separated from core computational components such as motion planning and collision checking, which are provided separately as C++ libraries.
Workspace analysis tools allow robot designers to test out the capabilities of their robot designs before building the hardware, using environment and object specific task specifications to quantify the workspace characteristics of different designs. This reduces costly mistakes and iterations in the design stage. We are actively working on completing the pick and place capabilities in MoveIt!, integrating with object recognition, perception, and grasping to allow manipulators to execute generalized pick and place actions.
More information about MoveIt!, including instructions on how to get and use it, can be found on the MoveIt! website. MoveIt! is currently an alpha release.
Catch the MoveIt! team at ICRA 2013 and ROSCon:
- ICRA Booth Demo: The Willow Garage Booth will have a MoveIt! demo as part of the exhibit. The booth is open on Tuesday, Wednesday and Thursday (May 7-9, 2013).
- ICRA Workshop Talk: Sachin Chitta is giving a talk on "MoveIt!: Software for Rapid Development of New Robotics Applications" at the ICRA Industrial Mobile Assistance Robots Workshop on Monday, May 6, 2013.
- ICRA Tutorial: MoveIt! will be presented at a tutorial on Friday May 10, 2013: Motion Planning for Mobile Manipulation: State-of-the-art Methods and Tools, organized by Sachin Chitta, Ioan Sucan, Mark Moll, Lydia Kavraki and Maxim Likhachev.
- ROSCon Keynote Talk: Sachin Chitta, Ioan Sucan and Acorn Pooley will be at ROSCon presenting MoveIt! at 9:30 AM on Saturday, May 11, 2013.
Willow Garage gratefully acknowledges the contributions of the following people to MoveIt! and associated packages that MoveIt! uses and depends on:
- Lydia Kavraki, Mark Moll, and associated members of the Kavraki Lab (Rice University) for developing OMPL - a suite of randomized planners that MoveIt! uses extensively.
- Dinesh Manocha and Jia Pan of UNC Chapel Hill for developing FCL - a package of collision checking algorithms used extensively by MoveIt!
- Maxim Likhachev (CMU), Ben Cohen (Penn) and Mike Phillips (CMU) for developing SBPL, a search-based planning library integrated with MoveIt!
- Armin Hornung, Kai Wurm, Maren Bennewitz, Cyril Stachniss, and Wolfram Burgard for developing Octomap - software for 3D occupancy mapping used by MoveIt!
- Mrinal Kalakrishnan, Peter Pastor and Stefan Schaal at USC for developing STOMP, the distance field components in MoveIt! and the implementation of the CHOMP algorithm in Arm Navigation
- Dave Coleman from the University of Colorado, Boulder for developing the MoveIt! Setup Assistant and adding documentation to the MoveIt! website.
MoveIt! evolved from the Arm Navigation and Grasping Pipeline components of ROS and we gratefully acknowledge the seminal contributions of all developers and researchers to those packages, especially Edward Gil Jones, Matei Ciocarlie, Kaijen Hsiao, Adam Leeper, and Ken Anderson.
We also acknowledge the contributions of the Willow Garage interns who have worked on MoveIt!, Arm Navigation, and associated components; members of the ROS and PR2 communities who have used, provided feedback on, and contributed to MoveIt! and Arm Navigation; and members of the ROS community for developing the infrastructure that MoveIt! builds on.
We also acknowledge the contributions of the ROS-Industrial consortium led by the Southwest Research Institute for supporting and building up infrastructure for applying MoveIt! and Arm Navigation to industrial robots and environments. Similarly, we acknowledge the contributions of Fraunhofer IPA to MoveIt! and support for the ROS-Industrial effort in Europe.
For more information visit moveit.ros.org
During his internship at Willow Garage, David Lu from Washington University in St. Louis spent the first three months of 2013 improving the navigation stack, a solution that many robots use to move around without colliding with obstacles. Specifically, he made the costmap functionality more flexible so that custom adjustments can be made, allowing the robot to navigate with increased awareness of specific things in its context, such as the presence of people.
The costmap is the data structure that represents, in a grid of cells, the places that are safe for the robot to occupy. Usually, the values in the costmap are binary, representing either free space or places where the robot would be in collision. The ROS navigation stack had the capacity to represent intermediate values, but with the exception of some values used to keep the robot from driving immediately next to obstacles, it primarily used the two extremes.
The new structure created by David allows extensive customization of the values that go into the costmap. The different parts of the costmap (the static map, the sensed obstacles, and the inflated areas) are separated into distinct layers. Each layer is implemented as a ROS plugin that can be compiled independently, and through the parameter server users can specify additional plugins with functionality of their own design.
One use case of special interest for David and his collaborators was the personal-space case mentioned above. By integrating a special "social" costmap plugin, the values around sensed people are increased according to a normal distribution, causing the robot to tend to drive farther away from the person. By taking these proxemic concerns and other social navigation issues into account, David aims to improve human-robot interaction by making the navigation stack produce friendlier navigation behaviors.
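As a hedged illustration of the idea (not David's actual plugin code), a layer could raise the cost of cells near a detected person with a Gaussian falloff so that the planner prefers paths that keep a respectful distance. The function name, sigma, and amplitude below are placeholders; a real implementation would live in a costmap layer plugin's updateCosts() step.

```cpp
// Illustrative sketch: add a Gaussian "personal space" cost around a person.
#include <costmap_2d/costmap_2d.h>
#include <algorithm>
#include <cmath>

void addPersonalSpace(costmap_2d::Costmap2D& costmap,
                      double person_x, double person_y,  // person position in world coords (m)
                      double sigma = 0.6,                // spread of the Gaussian (m)
                      double amplitude = 200.0)          // peak extra cost, below lethal (254)
{
  for (unsigned int my = 0; my < costmap.getSizeInCellsY(); ++my)
    for (unsigned int mx = 0; mx < costmap.getSizeInCellsX(); ++mx)
    {
      double wx, wy;
      costmap.mapToWorld(mx, my, wx, wy);
      double d2 = (wx - person_x) * (wx - person_x) + (wy - person_y) * (wy - person_y);
      // Cost falls off with a normal distribution around the person.
      unsigned char extra =
          static_cast<unsigned char>(amplitude * std::exp(-d2 / (2.0 * sigma * sigma)));
      unsigned char old_cost = costmap.getCost(mx, my);
      costmap.setCost(mx, my, std::max(old_cost, extra));
    }
}
```

Taking the maximum of the existing and the new cost preserves obstacles marked by other layers while still discouraging paths close to people.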
During his internship at Willow Garage, Eric Christiansen from UCSD worked on making computer vision faster and more accurate by developing efficient algorithms for describing and matching local regions of images.
Local image descriptors enable robots to make sense of what they see by describing an image as a set of small and relatively simple parts. These local descriptors can then be matched against datasets of labeled objects, enabling new objects to be identified. They can also be re-identified across views of the same scene to track motion or infer 3D geometry.
By restricting descriptor creation and matching to integer math, Eric and his collaborators created a descriptor that runs efficiently on low-power devices, such as mobile phones and small robots. In addition, by developing a technique for very accurate scale and rotation estimation, they created another descriptor with extremely high matching accuracy.
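For context, integer-only binary descriptors such as OpenCV's ORB can be extracted and matched with the Hamming distance, which reduces to fast bitwise operations. This is a standard OpenCV pipeline shown for illustration, not Eric's descriptors.

```cpp
// Standard OpenCV binary-descriptor pipeline, shown for context only.
#include <opencv2/features2d.hpp>
#include <vector>

std::vector<cv::DMatch> matchBinaryDescriptors(const cv::Mat& img1, const cv::Mat& img2)
{
  // Detect keypoints and compute integer (binary) descriptors.
  cv::Ptr<cv::ORB> orb = cv::ORB::create();
  std::vector<cv::KeyPoint> kp1, kp2;
  cv::Mat desc1, desc2;
  orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
  orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);

  // Match with Hamming distance; cross-checking removes many false matches.
  cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
  std::vector<cv::DMatch> matches;
  matcher.match(desc1, desc2, matches);
  return matches;
}
```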
These advances in speed and accuracy should enable robots to see faster and better than they previously could.
In addition, Eric developed two open source projects during his time at Willow. The first, an automatically generated Java wrapper for OpenCV, has been mentioned previously and should make it easier for computer vision researchers to reuse code. The second, Billy Pilgrim (named after a Kurt Vonnegut character), is a framework for evaluating local descriptors. Unlike previous frameworks, it integrates with the popular OpenCV library and runs seamlessly on a desktop or a cluster. Tools like these will hopefully drive innovation by providing a common platform on which to develop and test new ideas.
During his internship at Willow Garage, Jonathan Brookshire from MIT developed a set of techniques to assist robot teleoperators with simple manipulation tasks.
While teleoperating a robot during a manipulation task, a human operator is typically given full control over the robot's end-effector. Although full control is generally seen as advantageous, it can often provide too much freedom for the human operator. When inserting a peg into a hole, for example, the peg can really only be moved up and down (motions to the left or right are impossible). We created a system in which the user can define simple geometries and relationships that restrict certain kinds of motion.
Inspired by 3D modeling techniques, we created an interface in which the human operator can define geometries (lines, planes, cylinders) and constraints between these geometries. For example, the operator might specify a line affixed to a tool held by the robot and a plane affixed to a table top. A simple perpendicular constraint can then be used to require that the tool always remain perpendicular to the table. While the autonomous system constantly maintains the constraint, the user retains control of the remaining degrees of freedom via teleoperation.
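As an illustrative sketch of how such a constraint could be enforced (this is simple geometry, not the actual system), the commanded angular velocity can be projected onto the table normal so that only translation and spin about the normal remain under the operator's control; the struct and function names are hypothetical.

```cpp
// Illustrative constraint enforcement: keep the tool axis aligned with the
// table normal by removing commanded angular velocity that would tilt the tool.
#include <Eigen/Dense>

struct Twist
{
  Eigen::Vector3d linear;   // commanded translational velocity (m/s)
  Eigen::Vector3d angular;  // commanded angular velocity (rad/s)
};

Twist enforcePerpendicularity(const Twist& commanded, const Eigen::Vector3d& table_normal)
{
  Eigen::Vector3d n = table_normal.normalized();
  Twist constrained = commanded;
  // Keep only the angular component about the normal; tilting is suppressed,
  // while translation and spin about the axis stay with the operator.
  constrained.angular = n * n.dot(commanded.angular);
  return constrained;
}
```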
Our goal with these technologies is to simplify teleoperated manipulation. We enable the user to create simple constraints in real time and have the system enforce those constraints autonomously, allowing the user to focus on the relevant degrees of freedom.
For more information on our work on teleoperation please visit here.