Willow Garage Blog

May 12, 2013

At ROSCon 2013, we integrated MoveIt! with the Baxter Research Robot from Rethink Robotics. Baxter has a sonar array that provides some 3D information about the environment, but this information was not integrated with MoveIt! The motions generated by MoveIt! still avoid self-collisions (including collisions between the two arms). We would like to thank Rethink Robotics for allowing us to integrate with their robot, especially Joe Romano, Albert Huang, Christopher Gindel and Matthew Williamson for helping with the actual integration.

For more information about MoveIt!, see moveit.ros.org

May 8, 2013

The Columbia University Robotics Lab team at this year's Cornell Cup used a Willow Garage Velo2G gripper prototype for their Assistive Robotics project. Their goal was to build a low-cost robotic arm controlled by facial muscles. Thanks to its intrinsic compliance and adaptability, the Velo2G was a great fit. More details, including interviews with the Columbia University team members, can be found on Engadget's website.

May 6, 2013

Willow Garage is proud to announce the initial release of MoveIt!: new software that lets you build advanced applications integrating motion planning, kinematics, and collision checking with grasping, manipulation, navigation, perception, and control. MoveIt! is robot-agnostic software that can be set up quickly with your robot if a URDF representation of the robot is available. The MoveIt! Setup Assistant lets you configure MoveIt! for any robot and allows you to visualize and interact with the robot model.

MoveIt! can incorporate both actual sensor data and simulated models to build an environment representation. 3D sensor information can be automatically integrated in real time into the representation of the world that MoveIt! maintains. CAD models can also be imported into the same world representation if desired. Collision-free motion planning, execution, and monitoring are core capabilities that MoveIt! provides for any robot. MoveIt! updates its representation of the environment on the fly, enabling reactive motion planning and execution, which is essential for applications in human-robot collaborative environments.
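
As a rough illustration of what this looks like from a user's point of view, here is a minimal sketch using MoveIt!'s Python interface (moveit_commander). The planning group name "right_arm" and the target pose are placeholders that depend on the configuration the Setup Assistant generates for your robot, and a running move_group node is assumed.

```python
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

# Assumes a move_group node configured for your robot is already running.
moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("moveit_sketch", anonymous=True)

# "right_arm" is a placeholder planning group name.
group = moveit_commander.MoveGroupCommander("right_arm")

target = PoseStamped()
target.header.frame_id = group.get_planning_frame()
target.pose.position.x = 0.6
target.pose.position.y = -0.2
target.pose.position.z = 1.0
target.pose.orientation.w = 1.0

group.set_pose_target(target)
group.go(wait=True)       # plan a collision-free trajectory and execute it
group.stop()
group.clear_pose_targets()

moveit_commander.roscpp_shutdown()
```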
 
MoveIt! interfaces with controllers through a standard ROS interface, allowing for easy interoperability, i.e. the ability to use the same higher-level software with a variety of robots without needing to change code. MoveIt! is designed to be flexible, using a plugin architecture that lets users integrate their own custom components while still providing out-of-the-box functionality through default implementations. Furthermore, the ROS communication and configuration layer of MoveIt! is separated from core computational components such as motion planning and collision checking; the latter are provided separately as C++ libraries.

Workspace analysis tools allow robot designers to test the capabilities of their robot designs before building the hardware, using environment- and object-specific task specifications to quantify the workspace characteristics of different designs. This reduces costly mistakes and iterations at the design stage. We are actively working on completing the pick-and-place capabilities in MoveIt!, integrating with object recognition, perception, and grasping to allow manipulators to execute generalized pick-and-place actions.

Get MoveIt!

More information about MoveIt!, including instructions on how to get and use it, can be found on the MoveIt! website. MoveIt! is currently an alpha release.

Catch the MoveIt! team at ICRA 2013 and ROSCon:

  • ICRA Booth Demo: The Willow Garage Booth will have a MoveIt! demo as part of the exhibit. The booth is open on Tuesday, Wednesday and Thursday (May 7-9, 2013).
  • ICRA Workshop Talk: Sachin Chitta is giving a talk on "MoveIt!: Software for Rapid Development of New Robotics Applications" at the ICRA Industrial Mobile Assistance Robots Workshop on Monday, May 6, 2013.
  • ICRA Tutorial: MoveIt! will be presented at a tutorial on Friday May 10, 2013: Motion Planning for Mobile Manipulation: State-of-the-art Methods and Tools, organized by Sachin Chitta, Ioan Sucan, Mark Moll, Lydia Kavraki and Maxim Likhachev.
  • ROSCon Keynote Talk: Sachin Chitta, Ioan Sucan, and Acorn Pooley will present MoveIt! at ROSCon at 9:30 AM on Saturday, May 11, 2013.

Acknowledgements

Willow Garage gratefully acknowledges the contributions of the following people to MoveIt! and associated packages that MoveIt! uses and depends on:

  • Lydia Kavraki, Mark Moll, and associated members of the Kavraki Lab (Rice University) for developing OMPL - a suite of randomized planners that MoveIt! uses extensively.
  • Dinesh Manocha and Jia Pan of UNC Chapel Hill for developing FCL - a library of collision checking algorithms used extensively by MoveIt!
  • Maxim Likhachev (CMU), Ben Cohen (Penn) and Mike Phillips (CMU) for developing SBPL, a search-based planning library integrated with MoveIt!
  • Armin Hornung, Kai Wurm, Maren Bennewitz, Cyrill Stachniss, and Wolfram Burgard for developing OctoMap - software for 3D occupancy mapping used by MoveIt!
  • Mrinal Kalakrishnan, Peter Pastor and Stefan Schaal at USC for developing STOMP, the distance field components in MoveIt! and the implementation of the CHOMP algorithm in Arm Navigation
  • Dave Coleman from the University of Colorado, Boulder for developing the MoveIt! Setup Assistant and adding documentation to the MoveIt! website.

MoveIt! evolved from the Arm Navigation and Grasping Pipeline components of ROS and we gratefully acknowledge the seminal contributions of all developers and researchers to those packages, especially Edward Gil Jones, Matei Ciocarlie, Kaijen Hsiao, Adam Leeper, and Ken Anderson.

We also acknowledge the contributions of the Willow Garage interns who have worked on MoveIt!, Arm Navigation, and associated components; the members of the ROS and PR2 communities who have used, provided feedback on, and contributed to MoveIt! and Arm Navigation; and the members of the ROS community who developed the infrastructure that MoveIt! builds on.

We also acknowledge the contributions of the ROS-Industrial Consortium, led by the Southwest Research Institute, for supporting and building up infrastructure for applying MoveIt! and Arm Navigation to industrial robots and environments. Similarly, we acknowledge the contributions of Fraunhofer IPA to MoveIt! and its support for the ROS-Industrial effort in Europe.

For more information visit moveit.ros.org

May 3, 2013

During his internship at Willow Garage, David Lu from Washington University in St. Louis spent the first three months of 2013 improving the navigation stack, a solution that many robots use to move around without colliding with obstacles. Specifically, he made the costmap functionality more flexible, allowing custom adjustments so that the robot can navigate with increased awareness of specific things in its context, like the presence of people.

The costmap is the data structure that represents, in a grid of cells, the places that are safe for the robot to occupy. Usually, the values in the costmap are binary, representing either free space or places where the robot would be in collision. The ROS navigation stack had the capacity to represent intermediate values but, aside from some values used to keep the robot from driving immediately next to obstacles, it primarily used the two extremes.

The new structure created by David allows for extensive customization of the values that go into the costmap. The different parts of the costmap (the static map, the sensed obstacles, and the inflated areas) are separated into distinct layers. Each layer is a ROS plugin that can be compiled independently, and through the parameter server users can specify additional plugins with functionality of their own design.

One use case of special interest for David and his collaborators was the personal-space case mentioned above. By integrating a special "social" costmap plugin, the values around sensed people are increased according to a normal distribution, causing the robot to tend to keep a greater distance from people. By taking these proxemic concerns and other social navigation issues into account, David aims to improve human-robot interaction by making the navigation stack produce friendlier navigation behaviors.
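
The actual social layer is written as a costmap_2d C++ plugin; purely as a conceptual sketch of the cost adjustment it performs, the snippet below raises the cost of cells around a detected person according to a 2D normal distribution. The grid size, resolution, amplitude, and sigma values are hypothetical, not the plugin's parameters.

```python
import numpy as np

def add_person_cost(costmap, person_xy, resolution, amplitude=200.0, sigma=0.5):
    """Raise costs around a detected person with a 2D Gaussian bump.

    costmap    -- 2D array of cell costs (0 = free, 254 = lethal obstacle)
    person_xy  -- (x, y) position of the person in meters, map frame
    resolution -- meters per cell
    """
    h, w = costmap.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Cell centers in meters.
    cx = (xs + 0.5) * resolution
    cy = (ys + 0.5) * resolution
    d2 = (cx - person_xy[0]) ** 2 + (cy - person_xy[1]) ** 2
    bump = amplitude * np.exp(-d2 / (2.0 * sigma ** 2))
    # A layered costmap typically combines layers by taking the maximum,
    # so the social bump never lowers an existing obstacle cost.
    return np.maximum(costmap, np.minimum(bump, 253.0))

# Example: a 10 m x 10 m map at 5 cm resolution with a person at (4.0, 6.0).
grid = np.zeros((200, 200))
grid = add_person_cost(grid, (4.0, 6.0), resolution=0.05)
```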

These changes will be integrated into the core navigation stack in an upcoming distro. For more information, see the costmap and navigation wiki pages.

 

April 12, 2013

During his visit, François Ferland from IntRoLab at Université de Sherbrooke in Québec, Canada integrated a microphone array into the PR2. A microphone array can enhance speech and sound recognition in real-world settings by allowing the localization, tracking, and separation of multiple simultaneous sources. The 8Sounds and ManyEars project is a complete open hardware and software solution that enables robots to hear sounds from their environment and know where they came from. It consists of an 8-microphone array, a compact low-power USB sound card (8Sounds), and a software package (ManyEars) that locates the direction of up to 4 simultaneous sources and separates the signals of up to two of them.

Both halves of the project have been developed at IntRoLab from Université de Sherbrooke in Québec, Canada. 

8Sounds is a low-cost, USB-powered sound card with 8 mono inputs and one stereo output. The card was designed with undergraduate students to solve problems we encountered when using traditional, musician-oriented sound cards. These cards are usually bulky, require external power supplies, and have features such as MIDI ports that are not typically needed in robotic applications. Our solution is compact, draws very little power, and is easy to install on a robot. The 8Sounds system also includes powered, omnidirectional electret microphones with differential signaling for low-noise acquisition. The sound card uses the USB Audio Class 2.0 protocol and does not require special drivers on either Linux or Mac OS X.

The ManyEars algorithm, running on any conventional PC and released under the GNU General Public License (GPL), performs real-time beamforming for localization, particle filtering for tracking, and Geometric Source Separation (GSS). It is an easy-to-integrate C library; a ROS package and a Qt-based GUI for tuning parameters are also available.
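
ManyEars itself implements a considerably more sophisticated steered beamformer, but the basic localization idea can be sketched with a toy delay-and-sum beamformer: steer the array toward each candidate direction and keep the direction with the most output energy. The microphone positions, sample rate, and candidate directions below are hypothetical inputs, not part of the ManyEars API.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
FS = 48000              # sample rate in Hz (hypothetical)

def steered_power(frames, mic_positions, direction):
    """Delay-and-sum beamformer power for one candidate direction.

    frames        -- (num_mics, num_samples) array of recorded audio
    mic_positions -- (num_mics, 3) microphone coordinates in meters
    direction     -- unit vector pointing toward the candidate source
    """
    # A plane wave from `direction` reaches mics closer to the source
    # earlier; compute the per-channel offsets needed to re-align them.
    delays = -(mic_positions @ direction) / SPEED_OF_SOUND
    shifts = np.round((delays - delays.min()) * FS).astype(int)
    n = frames.shape[1] - shifts.max()
    aligned = np.stack([frames[i, s:s + n] for i, s in enumerate(shifts)])
    beam = aligned.sum(axis=0)           # constructive sum when steered right
    return float(np.mean(beam ** 2))

def localize(frames, mic_positions, candidate_directions):
    """Return the candidate direction with the highest beamformer power."""
    powers = [steered_power(frames, mic_positions, d)
              for d in candidate_directions]
    return candidate_directions[int(np.argmax(powers))]
```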

This demonstration shows the integration of the whole system on a PR2. The microphones have been installed on the upper part of its torso, and the 8Sounds card is tucked under the shell, below the PR2’s network router. The microphones do not have to be in a particular geometric configuration or orientation; only their positions relative to a (user-chosen) reference point are needed.

The 8Sounds and ManyEars system is simple to integrate into many situations where sound localization and tracking are needed, such as human-robot interaction. It can be installed on robots as small as the TurtleBot and as complex as the PR2. Because the system is both open hardware and open software, robot developers can easily customize and enhance any part of it for their applications.

For more details on the system please refer to F. Grondin, D. Létourneau, F. Ferland, V. Rousseau, F. Michaud (2013), “The ManyEars Open Framework - Microphone array open software and open hardware system for robotic applications”, Autonomous Robots, 34:217-232.

The related ROS packages are available here. The PR2-specific demonstration code is currently under the “jn0_patches” branch, but will eventually be merged into the master branch. 

For more information visit the project website for links to schematics, BOMs, and relevant software repositories.

April 3, 2013

During his internship at Willow Garage, Eric Christiansen from UCSD worked on developing efficient and accurate algorithms for describing and matching local regions of images.

Local image descriptors enable robots to comprehend what they see by describing an image as a set of small and relatively simple parts. These local descriptors can then be matched against datasets of labeled objects, enabling new objects to be identified. They can also be re-identified across views of the same scene to track motion or infer 3D geometry.

By restricting descriptor creation and matching to integer math, Eric and his collaborators created a descriptor that runs efficiently on low-power devices, such as mobile phones and small robots. By also developing a technique for very accurate scale and rotation estimation, they created another descriptor with extremely high matching accuracy.
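
Eric's descriptors are not part of OpenCV, but the flavor of integer-only description and matching can be illustrated with OpenCV's ORB binary descriptor and brute-force Hamming matching (a stand-in, not his method). Depending on your OpenCV version the constructor may be cv2.ORB() rather than cv2.ORB_create(), and the image paths below are placeholders.

```python
import cv2

# Two views of the same scene (placeholder paths).
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

# ORB produces 256-bit binary descriptors, so description and matching
# need only integer operations (popcounts for Hamming distance).
orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force matching with Hamming distance; crossCheck keeps only
# mutually-best matches, a cheap way to reject outliers.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print("best match distance:", matches[0].distance if matches else None)
```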

These advances in speed and accuracy should enable robots to see faster and better than they previously could.

In addition, Eric developed two open source projects during his time at Willow Garage. The first, an automatically generated Java wrapper for OpenCV, has been previously mentioned and should make it easier for computer vision researchers to reuse code. The second, Billy Pilgrim (named for a Kurt Vonnegut character), is a framework for evaluating local descriptors. Unlike previous frameworks, it integrates with the popular OpenCV library and runs seamlessly on a desktop or a cluster. Tools like these will hopefully drive innovation by providing a common platform on which to develop and test new ideas.

March 27, 2013

During his internship at Willow Garage, Jonathan Brookshire from MIT developed a set of techniques to assist robot teleoperators with simple manipulation tasks.  

While teleoperating a robot during a manipulation task, a human operator is typically given full control over the robot’s end-effector.  Although full control is generally seen as advantageous, it can often provide too much freedom for the human operator.  When inserting a peg into a hole, for example, the peg can really only be moved up and down (motions to the left or right are impossible).  We created a system where the user can define simple geometries and relationships to restrict certain kinds of motion.

Inspired by 3D modeling techniques, we created an interface where the human operator can define geometries (lines, planes, cylinders) and constraints between these geometries.  For example, the operator might specify a line affixed to a tool held by the robot and a plane affixed to a table top.  A simple perpendicular constraint can then be used to require that the tool always remain perpendicular to the table.  While the autonomous system continuously maintains the constraint, the user retains control of the remaining freedoms via teleoperation.

Our goal with these technologies is to simplify teleoperated manipulation.  We enable the user to create simple constraints in real time, and the system enforces those constraints autonomously, allowing the user to focus on the relevant degrees of freedom.
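
As a hypothetical numerical sketch (not Jonathan's actual controller), one way to enforce a tool-perpendicular-to-table constraint is to keep only the component of the operator's commanded angular velocity that preserves the constraint and add a small corrective term, while passing the commanded translation through untouched. The gain and frame conventions below are assumptions.

```python
import numpy as np

def constrain_twist(linear_cmd, angular_cmd, tool_axis, plane_normal, gain=1.0):
    """Filter a commanded end-effector twist so a tool axis stays
    perpendicular to a plane (i.e., parallel to the plane normal).

    linear_cmd, angular_cmd -- operator's commanded velocities (3-vectors)
    tool_axis               -- current unit vector along the tool line
    plane_normal            -- unit normal of the constraint plane
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    a = tool_axis / np.linalg.norm(tool_axis)
    # Only rotation about the plane normal preserves the constraint, so keep
    # just that component of the commanded angular velocity.
    angular = np.dot(angular_cmd, n) * n
    # Proportional correction that rotates the tool axis back toward the
    # normal if it has drifted out of alignment.
    angular += gain * np.cross(a, n)
    # Translation does not affect perpendicularity, so pass it through.
    return linear_cmd, angular
```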

For more information on our work on teleoperation please visit here.

March 21, 2013

In order for robots to work well in human environments, they must be able to plan for and execute many different types of manipulation.

During her internship last spring at Willow Garage, Jenny Barry from MIT implemented an algorithm for planning with multiple types of manipulation on the PR2.  This algorithm takes as input a starting configuration of the robot and objects, and outputs a sequence of robot trajectories corresponding to different types of manipulation.  For example, the planner could return a plan that first moves the arm to pick up an object, then moves the base over to another table, and then moves the arm again to place the object on the table.

By planning for multiple primitives at once, we are able to ensure that our initial actions do not preclude later actions.  For example, we can guarantee that the grasp used in a pick does not collide with the environment during the place.  We can also plan for types of manipulation during which the object is not rigidly attached to the robot, such as pushing, sliding, or throwing.  Currently, Jenny has implemented Pick, Place, and Push on the PR2.
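
A toy sketch of that idea, with deliberately hypothetical grasp and collision-checking placeholders rather than the planner's actual interfaces: candidate grasps are kept only if they remain feasible at both the pick pose and the place pose, so an early choice never precludes the later action.

```python
def grasps_valid_for_pick_and_place(candidate_grasps, pick_pose, place_pose,
                                    collision_free):
    """Keep only grasps that are collision-free at BOTH poses.

    candidate_grasps      -- iterable of grasp descriptions (hypothetical type)
    pick_pose, place_pose -- object poses at pick time and place time
    collision_free        -- callable(grasp, object_pose) -> bool, a placeholder
                             for a real check against the planning scene
    """
    return [g for g in candidate_grasps
            if collision_free(g, pick_pose) and collision_free(g, place_pose)]
```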

For more information on the DARRT planner, visit here, and see J. Barry, K. Hsiao, L. Kaelbling, and T. Lozano-Perez, "Manipulation with Multiple Action Types," ISER 2012.

 

March 6, 2013

 

Tommaso Cavallari, a master's student from the University of Bologna, spent his fall internship at Willow Garage working on the recognition and tracking of moving objects on a rotating platform.

He focused on enabling robots to recognize a platform and its axis of rotation, recognize objects placed on the platform, and understand how those objects rotate over time.

There were a few challenges to overcome, such as the relative motion between the camera observing the scene and the objects, objects obscured by clutter, and objects in the scene being moved or altered by other robots or people.

By the end of his internship, Tommaso had developed a solution that can detect one or more objects rotating on a platform. In addition, his solution finds the geometric parameters that describe this movement. In the future, his work will allow robots to figure out how to reliably pick up an object even while it is moving, either by predicting where the object will be located at a specific time or by following the object with the arm as it moves.
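
As a hypothetical sketch of the prediction step (not Tommaso's code): once the rotation axis, a point on that axis, and the angular velocity have been estimated, an object's future position can be computed with Rodrigues' rotation formula. The example values at the bottom are made up.

```python
import numpy as np

def predict_position(p0, axis_point, axis_dir, angular_velocity, t):
    """Predict where a point rotating about a fixed axis will be at time t.

    p0               -- current 3D position of the object on the platform
    axis_point       -- any point on the rotation axis
    axis_dir         -- vector along the rotation axis
    angular_velocity -- signed angular speed in rad/s
    t                -- prediction horizon in seconds
    """
    k = axis_dir / np.linalg.norm(axis_dir)
    theta = angular_velocity * t
    v = p0 - axis_point
    # Rodrigues' rotation formula: rotate v by theta about k.
    v_rot = (v * np.cos(theta)
             + np.cross(k, v) * np.sin(theta)
             + k * np.dot(k, v) * (1.0 - np.cos(theta)))
    return axis_point + v_rot

# Example: object at (0.3, 0.0, 0.8) on a platform spinning at 0.5 rad/s
# about a vertical axis through (0.0, 0.0, 0.8); where will it be in 2 s?
print(predict_position(np.array([0.3, 0.0, 0.8]),
                       np.array([0.0, 0.0, 0.8]),
                       np.array([0.0, 0.0, 1.0]),
                       0.5, 2.0))
```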

Head over to GitHub for more information and access to the source code.

 

February 19, 2013

 

OpenCV now has bindings for desktop Java, extending the set of supported languages and platforms to C/C++, Python, Android, desktop Java, and any JVM language that interoperates with Java, such as Scala and Clojure. The existing Android Java API has recently been extended to support desktop Java as well. Unlike the popular JavaCV project, these bindings are automatically generated by parsing the OpenCV C++ headers. This has two major advantages:

  • The Java wrapper is automatically kept up-to-date.
  • The bindings closely match the original C++ interface. More information is available on the wiki page and in the new tutorial.

Thanks for this work go to Willow Garage intern Eric Christiansen along with Andrey Pavlenko and Andrey Kamaev of Itseez.