Willow Garage Blog
We are proud to introduce TurtleBot, a unique combination of state-of-the-art technology in a hobby platform. In keeping with Willow Garage's mission to bring personal robotics to the home, we feel that the time is ripe to put a low-cost, personal robot kit with open-source software in the hands of hobbyists and developers.
With TurtleBot, you'll be able to build a robot that can drive around your house, see in 3D, and have enough horsepower to create exciting applications. The main hardware includes:
- iRobot Create: mobile base that has been an effective platform for robotics in education.
- Microsoft Kinect: camera and 3D sensor in one package.
- Asus Eee PC 1215N dual-core Atom notebook: powerful enough to handle the demands of 3D data.
- Low-cost gyro: enhances the TurtleBot's ability to navigate around the home.
If you're tired of wiring and soldering to get your robot up and running, don't worry -- the TurtleBot assembles quickly with a single screwdriver, which is included in the kit.
The TurtleBot comes with an open-source, ROS-based TurtleBot SDK that lets you get the most out of the hardware. The TurtleBot SDK integrates the hardware drivers with developer tools and high-level capabilities like autonomous navigation. You'll be able to develop apps from day one that build on powerful computer vision libraries like OpenCV and PCL. You'll also be able to access the thousands of libraries that the ROS community has built and share your code with the rest of the TurtleBot community.
With TurtleBot, we are adding a new dimension of possibilities to your Kinect hacking: the ability to drive. TurtleBot can explore your house on its own, build 3D pictures, take panoramas, and more. Check out the results of our ROS 3D Contest to see some of the exciting possibilities for the Kinect in robotics.
There are two ways to bring a TurtleBot home. If you already have an iRobot Create and a laptop, you can purchase the TurtleBot Core kit for $499.99. The TurtleBot Core kit includes:
- USB Communications Cable
- TurtleBot Power and Sensor Board
- TurtleBot Hardware
- Microsoft Kinect
- TurtleBot to Kinect Power Cable
- USB Stick TurtleBot Installer
- #10 Torx Allen Key
The TurtleBot Complete kit sells for $1199.99 and includes everything you need to get started:
- TurtleBot Core Kit
- iRobot Create Robot
- 3000 mAh Ni-MH Battery
- Fast Charger
- ASUS Eee PC 1215N
We look forward to seeing what you can make TurtleBot do!
We will be at RoboGames 2011, which is being held this weekend at the San Mateo Fairgrounds. Stop by Saturday and Sunday to see our new TurtleBot platform. We're very excited to have some honored Robots Using ROS guests: Pi Robot, Maxwell, and VeltroBot will be making appearances in our booth. Artist Josh Ellingson has designed some new stickers and posters that we will be giving away in our booth.
Rosie Li from Washington University in St. Louis spent her internship at Willow Garage working on the Point Cloud Library (PCL). Rosie implemented a novel surface reconstruction algorithm for general three-dimensional point cloud data. To find out more, please watch the video above. You can also read the slides below (download pdf) for more technical details. The software is available as open source in PCL.
Dirk Holz from the University of Bonn in Germany spent his internship at Willow Garage working on the Point Cloud Library (PCL). He implemented a set of modular components for registering point clouds to create three-dimensional models of objects and environments. The work on the registration part of PCL is a joint effort with other researchers from the PCL community and an ongoing project. Please watch the video above for the first demonstrations of what is already achievable, or read the slides below (download pdf) for more technical details. The software is available as an open-source part of the PCL project.
If your idea of a good summer job is one that's intellectually stimulating, boosts your resume and pays well, look no further. The Point Cloud Library (PCL) and OpenCV computer vision libraries have been accepted as mentoring organizations for this year's Google Summer of Code, which offers student developers stipends to work on open source software. This year, PCL has several summer projects that we're excited about working on, including:
- Point cloud registration
- Real-time segmentation and tracking
- Geometric object recognition
- Surface reconstruction with textures
- and more!
If you're a student with good programming skills and an interest in computer graphics, computer vision and 3D cameras such as the Kinect, then we'd love to have your help. Friday, April 8 is the last day to submit your application, so apply today for PCL and OpenCV!
The Point Cloud Library (PCL) moved today to its new home at PointClouds.org. Now that quality 3D point cloud sensors like the Kinect are cheaply available, the need for a stable 3D point cloud-processing library is greater than ever before. This new site provides a home for the exploding PCL developer community that is creating novel applications with these sensors.
PCL contains numerous state-of-the art algorithms for 3D point cloud processing, including filtering, feature estimation, surface reconstruction, registration, model fitting and segmentation. These algorithms can be used, for example, to filter outliers from noisy data, stitch 3D point clouds together, segment relevant parts of a scene, extract keypoints and compute descriptors to recognize objects in the world based on their geometric appearance, and create surfaces from point clouds and visualize them -- to name a few.
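PCL implements these algorithms in optimized C++. As a rough pure-Python illustration of the idea behind just one of them -- statistical outlier filtering, along the lines of PCL's StatisticalOutlierRemoval -- here is a minimal sketch; the point cloud, parameter values, and function name are invented for the example:

```python
import math

def outlier_filter(points, k=2, std_mult=1.0):
    """Drop points whose mean distance to their k nearest neighbors
    is unusually large (above mean + std_mult * stddev, computed over
    all points). This mimics the idea of statistical outlier removal;
    a real implementation would use a spatial index, not O(n^2) search."""
    mean_dists = []
    for p in points:
        ds = sorted(math.dist(p, q) for q in points if q is not p)
        mean_dists.append(sum(ds[:k]) / k)
    mu = sum(mean_dists) / len(mean_dists)
    var = sum((d - mu) ** 2 for d in mean_dists) / len(mean_dists)
    thresh = mu + std_mult * math.sqrt(var)
    return [p for p, d in zip(points, mean_dists) if d <= thresh]

# A tight cluster plus one far-away stray point:
cloud = [(0, 0, 0), (0.1, 0, 0), (0, 0.1, 0), (0.1, 0.1, 0), (5, 5, 5)]
filtered = outlier_filter(cloud, k=2, std_mult=1.0)
```

The stray point's neighbor distances dominate the statistics, so it falls outside the threshold and is removed while the cluster survives.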
First Anniversary: a brief history of PCL
This new site also celebrates the one-year anniversary of PCL. Official development of PCL started in March 2010 at Willow Garage. Our goal was to create a library that can support the type of 3D point cloud algorithms that mobile manipulation and personal robotics need, and to combine years of experience in the field into a coherent framework. PCL's grandfather, Point Cloud Mapping, was developed just a few months earlier, and it served as an important building block in Willow Garage's Milestone 2. Based on these experiences, PCL was launched to bring world-class research in 3D perception together into a single software library. PCL would enable developers to harness the potential of the quickly growing 3D sensor market for robotics and other industries.
For this occasion, we put together a video that presents the development of PCL over time.
Towards 1.0: PCL and Kinect
The launch of the Kinect sensor in November 2010 turned many eyes on PCL, and its user community quickly multiplied. We turned our focus to stabilizing and improving the usability of PCL so that users would be able to develop applications on top of it. We are now proud to announce that the upcoming release of PCL features a complete Kinect (OpenNI) camera grabber, which allows users to get data directly into PCL and operate on it. PCL has already been used by many of the entries in the ROS 3D contest, showing the potential of Kinect and ROS. Please check our website for tutorials on how to visualize and integrate Kinect data directly in your application.
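The grabber itself is a C++ API; as a toy Python sketch of the callback pattern such grabbers follow (register a callback, start the grabber, and frames are pushed to your code), with every name and the synthetic frame source invented for illustration:

```python
class ToyGrabber:
    """Toy callback-style grabber, loosely modeled on the pattern a
    camera grabber follows: consumers register callbacks, then frames
    are pushed to them as they arrive. Not the real PCL/OpenNI API."""

    def __init__(self, frame_source):
        self.frame_source = frame_source   # any iterable of frames
        self.callbacks = []

    def register_callback(self, cb):
        self.callbacks.append(cb)

    def start(self):
        # Drain the (finite) source, pushing each frame to every
        # callback. A real grabber would run this on a driver thread.
        for frame in self.frame_source:
            for cb in self.callbacks:
                cb(frame)

# Usage: collect every synthetic "point cloud" the grabber produces.
frames_seen = []
grabber = ToyGrabber(frame_source=[{"points": i} for i in range(3)])
grabber.register_callback(frames_seen.append)
grabber.start()
```

The inversion of control is the point: your application never polls the device; it just reacts to each cloud as it arrives.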
The PCL development team is currently working hard toward a 1.0 release. PCL 1.0 will focus on modularity and enable deployment of PCL on different computational devices.
A Growing Community
We are proud to be part of an extremely active community. Our development team spans three continents and five countries, and it includes prestigious engineers and scientists from institutions such as AIST, University of California Berkeley, University of Bonn, University of British Columbia, ETH Zurich, University of Freiburg, Intel Research Seattle, LAAS/CNRS, MIT, NVIDIA, University of Osnabrück, Stanford University, University of Tokyo, TUM, Vienna University of Technology, Willow Garage, and Washington University in St. Louis.
PCL wouldn't have become what it is today without the help of many people. Thank you to our tremendous community, especially our contributors and developers who have worked so hard to make PCL more stable, more user friendly, and better documented. We hope that PCL will help you solve more 3D perception problems, and we look forward to your contributions!
This Wednesday and Thursday, the esteemed magazine The Economist is hosting their second annual Ideas Economy: Innovation event. Technologists, politicians and thinkers from all over the world are descending on the U.C. Berkeley Campus for a discussion on all things Innovation. You can follow the event on Twitter (@ideaseconomy, #ideaseconomy) and fora.tv.
Our own CEO, Steve Cousins, will be interviewed on Thursday along with Jaron Lanier. As part of the lead up to the event, The Economist asked Steve and others to respond to three questions. Here are Steve's responses:
1. What is the proper role for government in catalyzing innovation and entrepreneurship?
Government should make strategic long-term investments in science and industry that promise to grow industry and help entrepreneurs thrive. If there isn't already something called the 'Mosaic Model' (after the first Web browser), then I'd argue that there should be. In 1991, Congress passed The High Performance Computing and Communication Act (HPCA), a bill introduced by then-Senator Al Gore. This in turn led to the High-Performance Computing and Communications Initiative, a program whose funding wound up at the University of Illinois Urbana-Champaign. In the talented hands of Marc Andreessen and others, Mosaic was introduced to the World Wide Web in 1993. Mosaic led to Netscape, and here we are, nearly twenty years later. The impact of this bill on the world today can't be overstated.
If there's one change to that model I would recommend, it is to invest in such a way that an innovation can't be locked up by an individual or a single corporation. Government-backed initiatives should result in open innovation that benefits all citizens, streamlining the process so that entrepreneurs can combine and extend results to bring value to everyone. The fact that Mosaic was generously shared was a key factor in the growth of the Web.
2. Is American innovation in decline, a.k.a., is power shifting? Why or why not?
The reality is that geography and national borders matter a lot less than they used to, and that innovation is in a constant state of flux around the world. While this may feel like a threat to U.S. business -- and Silicon Valley, in particular -- I don't see any evidence that American innovation is in decline. Arguably the two most significant developments from the past five years -- the growth of Google's Android and Apple's iOS -- took place right in our innovation back yard. Try to imagine the growth of the smartphone industry without these contributions.
The United States no longer has a stranglehold on innovation, but the assumption that innovation is Made in America is a very provincial myth. Yes, there's something unique about American culture that fosters innovation, but why does a cultural proclivity have to imply ownership? Innovation will ebb and flow across borders, but it's more accurate to say that innovation is accelerating worldwide, and that U.S. innovation continues to grow with this trend.
3. Are entrepreneurs born or made?
Entrepreneurs are made. Genetic factors like intelligence are distributed equally around the world, but education and opportunity are not. Entrepreneurship is environmental, and the requirements for successful entrepreneurship, by and large, congregate in a small group of centers around the world. Silicon Valley has a disproportionate share of high-tech innovation due to the confluence of great educational institutions (Stanford and Berkeley, for example), access to capital up and down Sand Hill Road, and a workforce that has evolved to take economic risks.
You can identify these environments by looking at their output in terms of number of start-ups, and the economic impact those start-ups have. Innovation centers nurture entrepreneurs owing to the concentrated access to experience, support systems, and capital. At a certain point, a virtuous cycle develops where successful entrepreneurs are there to support the next generation.
Working at Willow Garage in an office full of robots running around may sound like a lot of fun (and it usually is), but sometimes it’s just plain confusing. Should I move out of the way for this PR2 or did it already reach its destination? Is that PR2 trying to open that door or is it just sitting idle?
By working with and around robots every day, we frequently stumble upon human-robot interaction design challenges and research inspiration. There is clearly a design challenge in communicating internal robot states (e.g., goals, task status) and requests (e.g., to persuade people to step aside) to effectively reach the robot’s assigned goals. On the human side, this type of communication can also make robot behaviors more predictable (thereby, safer and less startling) and maybe even more appealing.
This week, we’ll be presenting the results of our research on nonverbal behaviors that robots use at the HRI 2011 conference in Lausanne, Switzerland. In particular, the presentation will empirically demonstrate the animation talents of Pixar animator Doug Dooley, our coauthor (paper). Unfortunately, it’s difficult to share animations of robots, particularly since robots have different body forms and kinematics.
On the other hand, it’s very easy to share robot sounds. In collaboration with sound designer EJ Holowicki, we’ve created a set of sound libraries for communication between people and robots. One of the lessons we’ve learned from this iterative sound design process is that almost no one agrees on what “voice” is best. Therefore, we’ve provided a set of options for you to try out.
Enjoy! These sound libraries are licensed via Creative Commons (CC0) so that you can feel free to use them.
We're looking forward to seeing you in Lausanne, Switzerland at HRI 2011 from March 6-9, 2011! If you're interested in checking out what Willow Garage has been up to lately, come check out our research talks and posters.
Research Paper Presentations
Monday, March 7, 2011
During the morning session, Jenay M. Beer will be presenting her work as she talks about Supporting successful aging with mobile remote presence systems. Jenay M. Beer, Leila Takayama
During the afternoon session, Leila Takayama will be presenting her work as she talks about Expressing thought: Improving robot readability with animation principles. Leila Takayama, Doug Dooley, Wendy Ju
Monday, March 7, 2011
18:00-20:00 Enjoy the reception while checking out the posters.
RIDE: Mixed-Mode Control for Mobile Robot Teams. Erik Karulf (1), Marshall Strother (1), Parker Dunton (1), and William D. Smart (1, 2). (1) Washington University, St. Louis; (2) Willow Garage, Inc.
User Observation & Dataset Collection for Robot Training. Caroline Pantofaru, Willow Garage, Inc.
Using Depth Information to Improve Face Detection. Walker Burgin (1), Caroline Pantofaru (2), and William D. Smart (1, 2). (1) Washington University, St. Louis; (2) Willow Garage, Inc.
A Panorama Interface for Telepresence Robots. Daniel A. Lazewatsky (1) and William D. Smart (1, 2). (1) Washington University, St. Louis; (2) Willow Garage, Inc.
Polonius: A Wizard of Oz Interface for HRI Experiments. David V. Lu (1) and William D. Smart (1, 2). (1) Washington University, St. Louis; (2) Willow Garage, Inc.
ROS Diamondback has been released! This newest distribution of ROS gives you more drivers, more libraries, and more 3D processing. We've also worked on making it lighter and more configurable to help you use ROS on smaller platforms.
The Kinect is a game-changer for robotics and is used on ROS robots around the world. ROS is now integrated with the OpenNI Kinect/PrimeSense drivers, and the Point Cloud Library (PCL) has a new stable 3D-processing release for Diamondback. The ROS 3D contest entries showed the many creative ways you can integrate the Kinect with your robot, and we will continue to work on making the Kinect easier to use with ROS. We've also redone our C++ APIs to make OpenCV easier to use in ROS.
A Growing Community, More Robots
Diamondback is the first ROS distribution release to launch with stacks from the broader ROS community. Thank you to contributors from UT Austin, Uni Freiburg, Bosch, ETH Zurich, KU Leuven, UMD, Care-O-bot, TUM, University of Arizona and CCNY for making their drivers, libraries, and tools available. Many more robots are easier to use with ROS thanks to their efforts.
Now that there are over 50 different robots able to use ROS -- mobile manipulators, UAVs, AUVs, and more -- we are providing robot-specific portals to give you the best possible "out of the box" experience. If you have a Roomba, Nao, Care-O-bot, Lego NXT robot, Erratic, miabotPro, or PR2, there are now central pages to help you install and get the most out of ROS. Look for more robots in the coming weeks and months to be added to that list.
We've reorganized ROS itself and broken it into four separate pieces: "ros", "ros_comm", "rx", and "documentation". This lets you use ROS in both GUI and GUI-less configurations, so you can install ROS on your robot with a much smaller footprint. This separation will also assist with porting ROS to other platforms and integrating the ROS packaging system with non-ROS communication frameworks. For more details, see REP 100.
Since ROS C Turtle, we've adopted a new ROS Enhancement Proposal (REP) process to make it easier for you to propose changes to ROS. We have also transitioned ownership of the ROS camera drivers to Jack O'Quin at UT Austin and look forward to enabling more people in the outside community to have greater ownership over the key libraries and tools in ROS. Please see our handoff list to find ways to become more involved as a contributor.
We launched ROS Answers several weeks ago to make it easier for you to get in touch with a community of ROS experts. It is quickly becoming the best knowledge base on ROS, with over 200 questions on a wide variety of topics.
For more information, please see the Diamondback release notes. Some of the additional highlights include:
- Eigen 3 support, with compatibility for Eigen 2.
- camera1394 now supports nodelets and has been relicensed as LGPL.
- rosjava has been updated and is now maintained by Lorenz Moesenlechner of TUM.
- bond makes it easier to let two ROS nodes monitor each other for termination.
- rosh is a new experimental Python scripting environment for ROS.
- New nodelet-based topic tools.
- PointCloud2 support in the ROS navigation stack.
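The idea behind bond -- two nodes agree to watch each other so that either one notices when the other dies -- can be sketched with simple heartbeats. This toy Python version is an illustration of the pattern only, with all names invented; it does not use the real bond API:

```python
import threading
import time

class ToyBond:
    """Toy version of the bond idea: each side posts a heartbeat, and
    if a peer's heartbeat goes stale the bond is considered broken.
    Illustration only; not the real ROS bond API."""

    def __init__(self, timeout=0.5):
        self.timeout = timeout
        self.last_beat = {}
        self.lock = threading.Lock()

    def heartbeat(self, node):
        # Record that `node` is still alive right now.
        with self.lock:
            self.last_beat[node] = time.monotonic()

    def peer_alive(self, peer):
        # The peer is alive if it has beaten within the timeout window.
        with self.lock:
            t = self.last_beat.get(peer)
        return t is not None and time.monotonic() - t < self.timeout

bond = ToyBond(timeout=0.5)
bond.heartbeat("talker")
bond.heartbeat("listener")
assert bond.peer_alive("talker")   # fresh heartbeat: bond is healthy

time.sleep(0.6)                    # "talker" stops beating...
bond.heartbeat("listener")         # ...while "listener" keeps going,
broken = not bond.peer_alive("talker")  # so it sees the bond break.
```

In real ROS the heartbeats travel over topics between processes, so either node can clean up when its partner exits or crashes.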
From the numerous contributed stacks to patches to bug reports, this release would not have been possible without the help of the ROS community. In particular, we appreciate your help testing and improving the software and documentation for the Diamondback release candidates to make this the best release possible. We hope that Diamondback helps you get more done with your robots, and we look forward to your contributions in future releases.