Human-Robot Interaction

Personal robotics requires a deeper understanding of how and why people and robots can most effectively interact with one another to reach a goal. Our Human-Robot Interaction (HRI) research draws from Human-Computer Interaction (HCI), Social and Cognitive Psychology, and the Cognitive Sciences to inform the theories and designs of how people control robots (teleoperation), interact through robots (telepresence), and interact with robots (HRI).

One research area of interest is how the form and behaviors of robots influence human-robot interactions. Using the PR2 and other robotic hardware platforms, we aim to develop a more generalizable understanding of how a robot's physical form influences how people respond to it. For example, how does a robot's pose affect the space people maintain between themselves and the robot? By collaborating with animators, sound designers, and others in the arts, we are also exploring effective ways of improving the communicative expressivity of robots, enabling richer communication between people and robots. In this line of work, we are investigating which other interaction models best inform the design of personal robot behaviors (e.g., various types of human-human, human-animal, and human-agent interactions).

Another research area of interest is how people's perceptions of robots influence HRI effectiveness. We are investigating how perceived capability and competence, agency, autonomy, and safety influence both behavioral task effectiveness and subsequent attitudes toward robots. For example: How does setting expectations about robots' capabilities influence people's immediate and subsequent beliefs about personal robots? How can a robot's design best convey its abilities and limitations to people? We are also working toward a better understanding of what people reflectively believe about robots as well as how they respond to robots in the moment.

Finally, we are focused on bridging the gap between robotics systems research and HRI research. We approach this by grounding our research in existing or near-term robotic systems, and by taking on research problems that will ultimately improve the state of the art in robotics as well as contribute to a better understanding of HRI. For example: How can robots most effectively learn from human demonstrations? How could navigational planning efficiency be improved through pedestrian-like robot behaviors? What user interface methods (e.g., one-to-one mapped teleoperation, high-level commands, a mixture) and modalities (e.g., graphical, auditory, haptic) are most effective for a given HRI situation?
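
To make the last question more concrete, the minimal Python sketch below contrasts one-to-one mapped teleoperation (continuous operator control of base velocity) with a high-level command interface (the operator names a goal and the robot plans the rest). The Robot class and its set_base_velocity and go_to methods are hypothetical placeholders for illustration only, not code from our systems or any particular robot API.

# Minimal sketch contrasting two teleoperation interface styles.
# The Robot class and its methods are hypothetical placeholders, not a real API.

class Robot:
    """Stand-in for a mobile robot base."""

    def set_base_velocity(self, linear, angular):
        # In a real system this would send a velocity command to the base.
        print(f"velocity command: linear={linear:.2f} m/s, angular={angular:.2f} rad/s")

    def go_to(self, location):
        # In a real system this would invoke a navigation planner.
        print(f"navigating to: {location}")


def direct_teleoperation(robot, joystick_x, joystick_y):
    """One-to-one mapping: each joystick deflection maps directly to a base velocity."""
    robot.set_base_velocity(linear=0.5 * joystick_y, angular=1.0 * joystick_x)


def high_level_command(robot, location):
    """High-level command: the operator names a goal; the robot handles the details."""
    robot.go_to(location)


if __name__ == "__main__":
    robot = Robot()
    direct_teleoperation(robot, joystick_x=0.2, joystick_y=0.8)  # operator steers continuously
    high_level_command(robot, "kitchen")                         # operator delegates the details

Which style is most effective, and for which tasks and operators, is exactly the kind of open HRI question this line of research addresses.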

Publications

2009
Takayama, Leila. "Toward a Science of Robotics: Goals and Standards for Experimental Research." Robotics: Science and Systems (RSS) Workshop on Good Experimental Methodology in Robotics, Seattle, WA, 2009.
Groom, Victoria, Takayama, Leila, and Nass, Clifford. "I Am My Robot: The Impact of Robot-Building and Robot Form on Operators." Proceedings of Human-Robot Interaction (HRI), San Diego, CA, pp. 31–36, 2009.
Takayama, Leila. "Making Sense of Agentic Objects and Teleoperation: In-the-moment and Reflective Perspectives." Late Breaking Results of Human-Robot Interaction (HRI), San Diego, CA, pp. 239–240, 2009.