Multimodal HRI with Remote and Head-worn Eye Trackers

[Figure: Multimodal human-robot interaction setup with the NAO robot]

Gaze is known to be a dominant modality for conveying spatial information, and it has been used for grounding in human-robot dialogues. In this work, we present a prototype of a gaze-supported multimodal dialogue system that enhances two core tasks in human-robot collaboration:

  1. our robot is able to learn new objects and their location from user instructions involving gaze, and
  2. it can instruct the user to move objects and passively track this movement by interpreting the user’s gaze (both tasks are sketched below).
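
Conceptually, both tasks reduce to associating gaze fixations with object locations in the shared scene. The following is a minimal sketch of that idea, assuming a simplified 2D scene model and gaze coordinates already mapped into it; all names, thresholds, and coordinates (`KnownObject`, `ground_reference`, `max_dist`, ...) are illustrative assumptions, not the actual system's API.

```python
# A minimal sketch of the two interaction tasks under a simplified 2D scene
# model. Class and function names are hypothetical, not the system's API.

from dataclasses import dataclass
import math


@dataclass
class KnownObject:
    label: str
    x: float  # position in scene coordinates, e.g., on the table plane (cm)
    y: float


def learn_object(label, gaze_x, gaze_y, objects):
    """Task 1: associate a spoken label with the currently fixated location."""
    objects.append(KnownObject(label, gaze_x, gaze_y))


def ground_reference(gaze_x, gaze_y, objects, max_dist=10.0):
    """Resolve a deictic reference ("this object") to the known object
    closest to the gaze fixation; return None if nothing is close enough."""
    best, best_dist = None, max_dist
    for obj in objects:
        d = math.hypot(obj.x - gaze_x, obj.y - gaze_y)
        if d < best_dist:
            best, best_dist = obj, d
    return best


def track_placement(obj, gaze_x, gaze_y, goal_x, goal_y, tol=5.0):
    """Task 2: passively follow a pick-and-place action by updating the
    object's believed position from gaze; True once the goal is reached."""
    obj.x, obj.y = gaze_x, gaze_y
    return math.hypot(goal_x - gaze_x, goal_y - gaze_y) <= tol


# Example: "This is the red cup" while fixating (12.0, 30.5) ...
scene = []
learn_object("red cup", 12.0, 30.5, scene)

# ... later, "Pick up this one" with a fixation near the stored location.
target = ground_reference(11.2, 29.8, scene)
print(target.label if target else "no object in gaze focus")
```

In such a scheme, the eye tracker's precision bounds how tight the grounding threshold (`max_dist`) can be set, which is one plausible way the precision difference between the two devices in the study below translates into additional clarification utterances.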

We performed a user study to investigate the impact of different eye trackers on user performance. In particular, we compare a head-worn device and an RGB-based remote eye tracker. Our results show that the head-worn eye tracker outperforms the remote device in terms of task completion time and the number of utterances required, owing to its higher precision.


Reference

Michael Barz, Peter Poller, and Daniel Sonntag: Evaluating Remote and Head-worn Eye Trackers in Multi-modal Speech-based HRI. In: Bilge Mutlu, Manfred Tscheligi, Astrid Weiss, and James E. Young (Eds.): Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2017), Vienna, Austria, March 6–9, 2017, pp. 79–80, ACM, 2017.
