Willow Garage at RSS June 28-July 1

Several of us will be at Robotics: Science and Systems (RSS) 2009 next week in Seattle. Here's a partial schedule:

Sunday, June 28, 2009

Workshop on Mobile Manipulation in Human Environments

Willow Garage is co-organizing this workshop, which will discuss state-of-the-art research in mobile manipulation. Sachin Chitta will be giving a talk during Session I (9:50-10:10) about our progress with autonomous door opening and plugging in.

Workshop on Algorithmic Automation

Gary Bradski will be giving a talk from 11:00-11:25 during Session 2. He will discuss a proposed modular, scalable architecture for recognizing objects and their 6-DOF poses.

Workshop on Good Experimental Methodology in Robotics

Leila Takayama will be giving a talk from 9:30-9:50. She will be presenting her paper "Toward a Science of Robotics: Goals and Standards for Experimental Research."

Workshop on Bridging the Gap Between High-Level Discrete Representations and Low-Level Continuous Behaviors

Bhaskara Marthi will be presenting on "Angelic Hierarchical Planning" during Session 4 from 5:00-5:20. His talk will discuss the use of "angelic semantics" to integrate planning at various levels of abstraction. He will also present some preliminary results showing how task planning and robotic motion planning can be combined using this approach.

Monday, June 29, 2009

6:30 – 9:30 Poster Session (Poster 16)

View-Based Maps, Kurt Konolige, Michael Calonder, James Bowman, Patrick Mihelich, JD Chen, Pascal Fua, Vincent Lepetit

Abstract: Robotic systems that can create and use visual maps in real time have obvious advantages in many applications, from automatic driving to mobile manipulation in the home. In this paper we describe a mapping system based on retaining stereo views of the environment that are collected as the robot moves. Connections among the views are formed by consistent geometric matching of their features. Out-of-sequence matching is the key problem: how to find connections from the current view to other corresponding views in the map. Our approach uses a vocabulary tree to propose candidate views, and a strong geometric filter to eliminate false positives; essentially, the robot continually re-recognizes where it is. We present experiments showing the utility of the approach on video data, including map building in large indoor and outdoor environments, map building without localization, and re-localization when lost.
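The propose-then-verify loop the abstract describes can be illustrated with a deliberately simplified sketch. This is not the paper's implementation: it uses a flat visual vocabulary instead of a hierarchical vocabulary tree, normalized bag-of-words histograms instead of TF-IDF scoring, and a toy translation-consistency check standing in for the paper's strong geometric (relative-pose) filter. All function names and data here are hypothetical.

```python
import numpy as np

def quantize(features, vocab):
    # Assign each feature descriptor to its nearest visual word.
    # (A flat vocabulary for clarity; the paper's vocabulary tree
    # makes this lookup logarithmic in vocabulary size.)
    dists = np.linalg.norm(features[:, None, :] - vocab[None, :, :], axis=2)
    return dists.argmin(axis=1)

def bow_histogram(words, vocab_size):
    # Build a unit-normalized bag-of-words histogram for one view.
    h = np.bincount(words, minlength=vocab_size).astype(float)
    n = np.linalg.norm(h)
    return h / n if n > 0 else h

def propose_candidates(query_hist, map_hists, top_k=3):
    # Rank stored views by cosine similarity to the current view and
    # return the indices of the best candidates for loop closure.
    scores = [h @ query_hist for h in map_hists]
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:top_k]

def geometric_filter(query_pts, cand_pts, tol=1.0):
    # Toy stand-in for the geometric check: accept a candidate only if
    # the matched keypoints agree on a single 2-D translation. A real
    # system would robustly estimate a full relative pose (e.g. RANSAC).
    offsets = cand_pts - query_pts
    spread = np.abs(offsets - offsets.mean(axis=0)).max()
    return spread < tol
```

A candidate view proposed by appearance alone is only linked into the map if its keypoint matches pass the geometric check; this two-stage design is what lets the robot "continually re-recognize where it is" without accumulating false loop closures.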