Four More PR2s Out The Door

The list of PR2 owners gets longer and more impressive every day. PR2 continues to push the frontiers and is now on its way to yet another continent. The University of Technology, Sydney (UTS) is now the proud owner of a PR2. Professor Mary-Anne Williams and her team have already spent a few weeks at Willow Garage working on the robot and preparing for the trip back to New South Wales, Australia. Professor Williams directs the Magic Lab at UTS and is Associate Dean (Research and Development) in the Faculty of Engineering and IT. The UTS PR2 will be socializing with people and connecting with smart technologies in the new Faculty of Engineering and IT Building in downtown Sydney.

At Carnegie Mellon University, the Search-Based Planning Lab (SBPL), led by Maxim (Max) Likhachev, also has its hands on a new PR2. The group already worked with the PR2 and ROS when Max was at the GRASP Lab at the University of Pennsylvania. Max's group will be using the PR2 to continue their research on real-time decision-making and motion planning for robots working in complex environments.

Cornell University has a new PR2 named Kodiak, which is joining the Personal Robotics Lab. Assistant Professor Ashutosh Saxena's research focuses on the ability of robots to operate autonomously in unstructured human environments. Professor Saxena will be providing the PR2 with the basic skills of recognizing human activities, understanding scenes, grasping and placing objects, and more. The PR2 will then be used in tasks such as arranging a disorganized house, finding and fetching items on request, putting items in a fridge, and more.

Lastly, another PR2 now calls Germany its home. (There is already one in Freiburg and another at TUM.) This one will be under the direction of Professor Jianwei Zhang, Director of the Institute of TAMS (Technical Aspects of Multimodal Systems) in the Department of Informatics at the University of Hamburg. The goal at TAMS is to develop methods and implement integrated real-time systems for acquiring, processing, and applying information from multiple channels such as robotic vision, speech and sound, touch through action, and more.

Stay tuned for more information on these initiatives in the future.