Willow Garage Blog
The CNRS Laboratory of Analysis and Architecture of Systems (LAAS-CNRS) in Toulouse, France owns a PR2 that it has successfully programmed for many human-robot interaction experiments. Recently, LAAS had some fun with the PR2 and staged an original theater performance titled Roboscopie.
In this stage play, the actor teaches the PR2 to "experience" common everyday items and events, such as a jacket, shelter, a blue bottle, and a ringing phone. You'll get a good laugh when the two exercise together on stage. Watch the short clip above, or see the full-length version of Roboscopie on YouTube, to see how the PR2 and friend take center stage.
Update: for more information, check out the official Roboscopie website.
In a little less than three months, Yiping Liu from Ohio State University made a significant update to the camera_pose stack, making it possible to calibrate cameras that are connected by moving joints and to store the result in a URDF file. The camera_pose stack captures both the relative poses of the cameras and the state of the joints between them, and its optimizer can run over multiple camera poses and multiple joint states.
The goal was to calibrate multiple RGB cameras mounted on a robot, such as the Microsoft Kinect, Prosilicas, and webcams, relative to one another. The results are automatically added to the robot's URDF.
Yiping set up a PR2 with a Kinect mounted on its head to demonstrate calibration between the onboard camera and statically mounted cameras. The PR2 was driven close enough to a statically mounted camera to capture and store the checkerboard pattern. The optimizer produces better results as measurements accumulate.
In the other use case, the PR2 looks at itself in a previously mapped space. Yiping built a simple GUI in camera_pose_toolkits for choosing the camera to calibrate; the PR2 moved in front of the selected camera and calibration was performed. The package publishes all calibrated camera frames to TF in real time. You can watch Yiping's camera calibration tests in his video.
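To give a sense of the math involved, here is a minimal sketch, not the actual stack's code, of how simultaneous checkerboard observations pin down the transform between two cameras. It uses planar SE(2) poses for readability (the real calibration is full 6-DoF), and the function names are ours:

```python
import math

def compose(a, b):
    """Compose two SE(2) poses a*b, each given as (x, y, theta)."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + math.cos(at) * bx - math.sin(at) * by,
            ay + math.sin(at) * bx + math.cos(at) * by,
            at + bt)

def invert(p):
    """Inverse of an SE(2) pose."""
    x, y, t = p
    c, s = math.cos(t), math.sin(t)
    return (-c * x - s * y, s * x - c * y, -t)

def estimate_extrinsic(pairs):
    """Estimate the cam1->cam2 transform from simultaneous
    checkerboard observations (T_cam1_board, T_cam2_board),
    averaging the per-measurement estimates."""
    xs = ys = cs = ss = 0.0
    for t1, t2 in pairs:
        x, y, t = compose(t1, invert(t2))  # T_cam1_cam2 implied by this pair
        xs += x; ys += y
        cs += math.cos(t); ss += math.sin(t)  # circular mean for the angle
    n = len(pairs)
    return (xs / n, ys / n, math.atan2(ss, cs))
```

With noisy real measurements, accumulating more checkerboard views tightens the averaged estimate, which is the intuition behind the stack's "better results with accumulated measurements."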
Irene Rae from the University of Wisconsin-Madison spent the summer of 2011 exploring ways to change behavior in human-robot interaction (HRI). Irene worked with the Texai remote presence system, which reduces the need for remote employees to travel to attend meetings or spend long hours commuting to the office.
The person working remotely pilots the Texai and works with locals in the office. Locals sometimes treat the Texai as an object: they rest their feet on its base, invade the pilot's sense of personal space, block the cameras, or stand where the pilot cannot see them. In other cases, locals ran away when they heard the robot coming or put "Kick me" signs on it. This treatment of the robot and pilot is symptomatic of infrahumanization, the tendency to treat someone perceived as an out-group member as less human than those viewed as part of the in-group.
The study looked at how pilots of the Texai could be treated more like in-group members and humans rather than out-group members, and tested ways to change this behavior through design or situational framing. To get locals to treat the pilot like part of the group, Irene Rae tested whether decorating the Texai with the team's colors improved the pilot's treatment. In another case, Irene tested whether verbally framing the situation for participants improved the interaction.
Increasing in-group feelings between the locals and the pilot can lead to better behavior toward the pilot, and higher levels of cooperation, collaboration, and team efficiency.
Assembling massive datasets from a large number of individual point clouds is an important part of mobile robotics research. This allows robots to see beyond their immediate surroundings, localize in both 2D and 3D, and share large-scale maps built by other robots. One of the challenges here is how to efficiently estimate and correct the pose error in the trajectory of the robot, without sacrificing accuracy. For example, correcting high-dimensional registration data graphs that represent a large building or a city can take a very long time.
During his internship, Jochen ported his registration framework ELCH (Explicit Loop Closing Heuristic) into the Point Cloud Library. ELCH corrects accumulated sensor data by finding loops in the robot's trajectory, estimating the pose error accumulated while driving along each loop using point cloud registration, and distributing that error over the complete trajectory. Our hope is that, using techniques like ELCH, we will be able to scale up the kinds of environments mobile robots can operate in.
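ELCH itself operates on full 6-DoF pose graphs with registration-derived error, but the final step, spreading the loop-closure correction along the path, can be sketched in a few lines. This illustrative 2D version (the function name is ours, and real ELCH uses a smarter weighting than this uniform one) leaves the first pose fixed and gives the last pose the full correction:

```python
def distribute_loop_error(poses, correction):
    """Spread a loop-closure correction (dx, dy) linearly over a
    trajectory of (x, y) poses: pose i receives fraction i/n of the
    correction, so the loop's start stays put and its end is moved
    exactly onto the loop-closure target."""
    n = len(poses) - 1
    out = []
    for i, (x, y) in enumerate(poses):
        w = i / n
        out.append((x + w * correction[0], y + w * correction[1]))
    return out
```

The point of a heuristic like this is speed: instead of a full nonlinear graph optimization over every pose, the error found at the loop closure is simply pushed back through the trajectory, which is what makes city-scale maps tractable.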
IEEE's Automaton blog has a great article on one of the more entertaining demonstrations at the PR2 Workshop at IROS: P.O.O.P. S.C.O.O.P. (Perception Of Offensive Products and Sensorized Control Of Object Pickup). Ben Cohen, Daniel Benamy, Anthony Cowley, Will McMahan, and Joe Romano, all from the GRASP Lab at Penn, outfitted PR2 with a pooper scooper and developed perception, navigation, and manipulation software to reliably detect and clean up artificial pet messes. The software achieved a reliability rate of 95% and performed a live demonstration for the workshop audience.
We had a busy and fun time at this year's IROS 2011. Thanks to all of you who stopped by our booth and talks. A lot of great robotics research was on display and we had a great time at the PR2 Workshop, talks, and interactive presentations. The exhibition was very exciting and we put together a montage video to celebrate all of the robots in action. Enjoy!
Hungry but too focused on your coding to leave your lab? IEEE Spectrum has posted a video from the University of Tokyo JSK Lab and Technische Universität München that shows PR2 going all the way from the upstairs JSK Lab, down the elevator to a Subway restaurant, and back, all on its own. Instead of having pre-programmed knowledge of where to buy the sandwich, the PR2 is able to do a "semantic search" that makes inferences about what sandwiches are and where they can be purchased in order to complete the task.
This demonstration relies on several new PR2 capabilities, such as manipulating elevator panels and adding multi-floor features to the ROS navigation stack. JSK and TUM also collaborated to make this a great integration of EusLisp, KnowRob, and ROS. We're really excited to see members of the PR2 Beta Program community working together to achieve even more impressive results.
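The "semantic search" idea, reduced to its essence, is inference over a knowledge base rather than lookup of a stored goal location. The toy sketch below, whose facts and names are invented for illustration and are not KnowRob's actual representation, infers where a sandwich can be bought by walking is-a hierarchies and a sells relation:

```python
# Hypothetical mini knowledge base (illustrative names, not KnowRob's).
IS_A = {"sandwich": "food", "subway": "restaurant"}
SELLS = {"restaurant": {"food"}}
LOCATION = {"subway": ("building_73", "floor_1")}

def _class_chain(entity):
    """Entity plus all of its ancestors in the is-a hierarchy."""
    chain = {entity}
    while entity in IS_A:
        entity = IS_A[entity]
        chain.add(entity)
    return chain

def places_to_buy(item):
    """Find known places whose class sells some class of `item`."""
    item_classes = _class_chain(item)
    results = []
    for place, loc in LOCATION.items():
        for place_class in _class_chain(place):
            if SELLS.get(place_class, set()) & item_classes:
                results.append((place, loc))
                break
    return results
```

The robot is never told "sandwiches are at Subway"; it only knows a sandwich is food, restaurants sell food, and Subway is a restaurant on a particular floor, and it derives the goal, which is what lets the same machinery answer questions about items it has never fetched before.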
For more information, please see the IEEE Spectrum article.
We're looking forward to seeing you in San Francisco, USA at IROS 2011 from September 25 - 30, 2011! If you're interested in checking out what Willow Garage has been up to lately, come check out our research talks and workshops. We'll also be demoing throughout the entire conference, so please come talk to us in the Exhibits hall at both the demonstration sessions and the Willow Garage booth!
Sunday, September 25:
- Motion Planning for Real Robots: Chitta, Jones, Sucan, Moll, Kavraki
- 3D Point Cloud Processing: PCL (Point Cloud Library): Rusu, Dixon, Aldoma, Gedikli
Monday, September 26:
- Active Semantic Perception and Object Search in the Real World: Aydemir, Pronobis, Marthi, Jensfelt, Holz
Friday, September 30:
- The PR2 Workshop: Results, Challenges and Lessons Learned in Advancing Robotics With a Common Platform: Smart, Chitta, Pantofaru, Rusu
Tuesday, September 27:
- Panel: Robots, the Next Generation: Cousins
Wednesday, September 28:
- Should Robots or People Do These Jobs? A Survey of Robotics Experts and Non-Experts about Which Jobs Robots Should Do: Ju, Takayama
- Outlet Detection and Pose Estimation for Robot Continuous Operation: Eruhimov, Meeussen
Thursday, September 29:
- Hierarchies of Octrees for Efficient 3D Mapping: Wurm, Hennes, Holz, Rusu, Stachniss, Konolige, Burgard
Given how pervasive the PR2 has become in academic institutions around the world, we thought it might be worth checking in with our PR2 community to see what research plans they have in place for the coming year. As always, we're inspired by the innovative research under way, but we were frankly surprised by the breadth and ambition of these initiatives. Given our goal of catalyzing the personal robotics industry, the more R&D underway, the better. The following brief descriptions provide some insight into personal robot applications in the not-too-distant future, including household tasks such as laundry and clean-up; robot-to-robot cooperation; navigating within human environments; and even dancing and pet-sitting.
In the next year, the team at Freiburg will continue the TidyUp project and begin the integration process. Currently, they are working on clearing items from tables and returning them to where they belong. The robots will wipe the tables, and perhaps other furniture such as shelves as well. They also plan to learn from human table settings in order to set the table for a selected number of people attending a meal.
During the upcoming year, Bosch plans to continue pursuing both hardware and software developments. Bosch plans to continue development of their proximity sensor for safe teleoperation in dynamic environments. They also plan to create Web interfaces that can be used for multiple tasks as well as for different robots without additional coding. As part of their efforts on shared autonomy, Bosch plans to conduct a user study comparing different manipulation assistance interfaces, as well as release additional packages for shared autonomy task planning. Together with TUM, Bosch will release a pipeline for autonomous semantic mapping.
Along with newly recruited faculty member Gabe Sibley, Professor Evan Drumwright will be co-teaching a class this fall on autonomous robots, using the PR2 as the platform of focus. Students will propose and carry out projects related to a theme; this semester's theme is getting the robot to perform tasks that aid in dog-sitting. Pets are important companions: we don't like leaving them in a kennel while we are away, and it is hard to find someone you trust to watch your pet at your home. Also, it's a damn hard thing for a robot to do!
During this second year of the beta program, MIT's goal is integration. The key objective is to be able to look for objects that are out of sight, including moving objects out of the way and opening doors.
This will require the team to integrate their hierarchical task-level planner, which plans in belief space, with their state estimation algorithm, visibility modeling, RRT* motion planner and object localization system to demonstrate planning involving information gathering.
In the upcoming academic year, Stanford will use the PR2 to research methods to increase the productivity of robot teleoperators. They will investigate interaction modalities and user interfaces that combine autonomous execution of high-performing subsystems (e.g., robotic navigation) with human supervision of subsystems with lower success rates (e.g., correcting automatically-generated "garbage" or "not garbage" labels of point cloud clusters in a clean-up task). They anticipate that such interfaces will allow temporal, as well as spatial, separation between the teleoperator and the robot, with potential to dramatically increase teleoperator productivity on tasks currently too difficult to fully automate.
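The interface idea described above boils down to triaging work by classifier confidence: the robot keeps what it is sure about and escalates the rest to the operator. A minimal sketch, with a hypothetical helper name and threshold, not Stanford's actual system:

```python
def triage(clusters, classify, threshold=0.8):
    """Split point cloud clusters between automatic handling and
    human review. `classify` returns a (label, confidence) pair,
    e.g. ("garbage", 0.95); labels below `threshold` go to the
    operator's queue instead of being acted on autonomously."""
    auto, for_human = [], []
    for cluster in clusters:
        label, conf = classify(cluster)
        bucket = auto if conf >= threshold else for_human
        bucket.append((cluster, label, conf))
    return auto, for_human
```

The productivity argument is that one operator can then supervise many robots, touching only the low-confidence residue, and the work need not happen in real time: queued corrections give the temporal separation the post describes.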
With very robust results in place for folding towels and sorting socks, and promising results for folding t-shirts, pants, and sweaters, UC Berkeley will continue to focus on enabling the PR2 to perform the entire laundry task, from a basket of dirty laundry, to washing, drying, folding or hanging, and putting the articles away. UC Berkeley will also continue to work on (rigid) object instance detection, and investigate push-grasps under uncertainty.
Researchers at the GRASP Lab at Penn recently added two microphone "ears" to their PR2, Graspy, and posted their methods on the hardware mods list. They are now working on various ways to use audio input to enable Graspy to do interesting things. One thrust is to adapt work they have been doing for the DARPA ARM-S project to the PR2. The team at Penn has written ROAR, the ROS Opensource Audio Recognizer. ROAR enables the user to easily train a one-class SVM to recognize an important sound that might intentionally or unintentionally arise during execution of an action, such as a handheld drill turning on or an object being knocked over. Penn is also working on a demo that will make the PR2 move in interesting ways ("dance") when you play various musical instruments, and on physical human-robot interaction, building on the PR2-props code that enables the PR2 to give high-fives and fist bumps. Other researchers at Penn are working on new methods for teleoperating mobile manipulator robots. They have code for providing quality vibrotactile feedback from the accelerometer in the robot's gripper, and are looking at various methods of measuring human arm movement and mapping it naturally to the robot.
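ROAR's recognizer is a one-class SVM over audio features: it learns only from examples of the target sound and flags anything too far outside that distribution. As a rough illustration of the one-class idea, here is a simple mean-plus-radius model standing in for the SVM; the class and parameter names are invented, and real audio features (e.g. spectral coefficients) would replace the toy vectors:

```python
import math

class OneClassSoundModel:
    """Toy one-class recognizer: store the mean of the training
    feature vectors and accept new samples whose distance to that
    mean is within a slack-scaled training radius."""

    def fit(self, features, slack=1.5):
        n = len(features)
        d = len(features[0])
        self.mean = [sum(f[i] for f in features) / n for i in range(d)]
        # radius = worst training distance, inflated by a slack factor
        self.radius = slack * max(self._dist(f) for f in features)
        return self

    def _dist(self, f):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(f, self.mean)))

    def predict(self, f):
        """True if `f` looks like the trained sound."""
        return self._dist(f) <= self.radius
```

A real one-class SVM draws a far more flexible boundary in a kernel feature space, but the usage pattern is the same: train on a handful of recordings of the drill spinning up, then monitor the microphone stream during execution and react when a frame matches.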
The JSK lab at the University of Tokyo has been using the PR2 robot to buy sandwiches at a local restaurant and deliver documents across offices. The technical issues they have been tackling are inter-floor navigation, on-site action learning, high level task planning and compiling, iPad interfaces, and knowledge database integration. These efforts are getting JSK one step closer to a real robot service application that can be used every day. They have already been teaching a class on ROS, OpenRTM, OpenHRP, and OpenRAVE, which raised a lot of awareness of the PR2 Beta Program throughout the University of Tokyo. In the second semester, the JSK lab will tackle the difficulties in getting the PR2 and a humanoid robot to cooperate together for a household task.
The Cognitive Robotics Group at Ulster plans to spend the coming year mainly on research supporting the IM-CLeVeR European FP7 project. The acronym stands for Intrinsically Motivated Cumulative Learning Versatile Robots.
More specifically, the IM-CLeVeR project aims at designing robots that cumulatively learn new efficient skills through autonomous development based on intrinsic motivations, and reuse such skills to accomplish multiple, complex, externally-assigned tasks. In the attached image, the robot is engaged in a task of cumulatively learning the appearance of objects placed on a table. In the next term, they plan to move forward in the direction of skill building, by having the PR2 solve complex problems using either skills it is provided with or new skills that it will learn "on demand".
Crowdsourcing provides a convenient and increasingly popular method for gathering large amounts of data and annotations. Amazon's Mechanical Turk and CrowdFlower, games such as the ESP Game, and requests for free annotation help such as LabelMe are just a few examples of crowdsourcing efforts. These attempts have taught us many lessons and brought up yet more questions. How can we most effectively elicit the information we need from a distant and potentially anonymous workforce? What kind of workforce is required for different tasks such as user studies and data set labeling? How can we train and evaluate workers?
The 2012 AAAI Spring Symposium on Wisdom of the Crowd will bring together researchers from robotics, user interfaces, games, computer vision, and other disciplines exploring the core scientific research challenges of crowdsourcing. This symposium will seek to facilitate interaction among researchers and work toward formulating a set of guidelines for future crowdsourcing endeavors.
The symposium will be held at Stanford University, March 26-28, 2012.
For more information, including the symposium format and a list of topics, please see the symposium website.
Important Dates & Submission Information
- October 7, 2011 - Submissions due
- November 4, 2011 - Acceptance notification
- January 20, 2012 - Camera-ready submission
- March 26-28, 2012 - Symposium
We invite contributions in the form of full papers (6 pages) and extended abstracts (2 pages).
Additional information is available on the main AAAI Spring Symposium website.