Multimodal HRI with Remote and Head-worn Eye Trackers

Multimodal Human-robot Interaction setup with NAO

Gaze is known to be a dominant modality for conveying spatial information, and it has been used for grounding in human-robot dialogues. In this work, we present a prototype of a gaze-supported multimodal dialogue system that enhances two core tasks in human-robot collaboration:

Continue reading

Evaluation of Mobile Eyetracking as Input Modality for Multitouch Surfaces

Eyetracking as an Input Modality

Multi-touch surfaces enable highly interactive and intuitive applications. Nevertheless, large devices come with a constraint of their own: users may be unable to reach every part of the display without walking around it or leaning over the surface. To compensate for this restriction, I present a method that uses mobile eyetracking as an additional input modality. In particular, I propose an approach based on marker-based display recognition and homogeneous transformations; a sketch of this mapping follows below. In a user study, I evaluated the implementation in terms of accuracy. From the results, I derived design guidelines for building such interfaces and discussed how to address the limitations of the proposed system.
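The full write-up has the details, but the core of the mapping can be sketched in a few lines. The sketch below is illustrative rather than the thesis implementation: it assumes OpenCV 4.7+ (where the ArUco API lives in the main module), four ArUco markers with IDs 0–3 fixed at the display corners, and a 1920×1080 display; the helper names `display_corners_from_markers` and `gaze_to_display` are made up for this example. The idea is to detect the markers in the eye tracker's scene-camera frame, estimate the homography from camera coordinates to display coordinates, and push the gaze point through it.

```python
import cv2
import numpy as np

# Assumed display resolution in pixels (illustrative, not from the thesis).
DISPLAY_W, DISPLAY_H = 1920, 1080

def display_corners_from_markers(frame):
    """Detect four ArUco markers placed at the display corners and return
    their centers in scene-camera pixels, ordered TL, TR, BR, BL."""
    detector = cv2.aruco.ArucoDetector(
        cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is None or len(ids) < 4:
        return None  # display not (fully) visible in this frame
    centers = {int(i): c[0].mean(axis=0) for i, c in zip(ids.flatten(), corners)}
    # Assumed convention: marker 0 = top-left, 1 = top-right,
    # 2 = bottom-right, 3 = bottom-left.
    try:
        return np.float32([centers[i] for i in (0, 1, 2, 3)])
    except KeyError:
        return None

def gaze_to_display(gaze_xy, marker_centers):
    """Map a gaze point (scene-camera pixels) to display pixels using the
    homography induced by the four marker-corner correspondences."""
    dst = np.float32([[0, 0], [DISPLAY_W, 0],
                      [DISPLAY_W, DISPLAY_H], [0, DISPLAY_H]])
    H, _ = cv2.findHomography(marker_centers, dst)
    pt = cv2.perspectiveTransform(np.float32([[gaze_xy]]), H)
    return pt[0, 0]  # (x, y) on the display
```

Because the homography operates in homogeneous coordinates, the mapping remains valid under the perspective distortion that arises as the wearer moves around the display, which is exactly the mobility a fixed touch surface alone cannot offer.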

Continue reading