Gaze is known to be a dominant modality for conveying spatial information, and it has been used for grounding in human-robot dialogues. In this work, we present the prototype of a gaze-supported multi-modal dialogue system that enhances two core tasks in human-robot collaboration:
WaterCoaster: A Device to Encourage People in a Playful Fashion to Reach Their Daily Water Intake Level
The WaterCoaster started as a seminar project (Gamified Life) comprising the design of a hardware prototype and a mobile app that measures a person's water intake. Using gamification elements, we wanted to persuade users to drink more frequently and to drink a healthier amount of water during work time. We published the results as late-breaking work at CHI 2016
Computational Modelling and Prediction of Gaze Estimation Error for Head-mounted Eye Trackers
The gaze estimation error is inherent in head-mounted eye trackers and seriously impacts the performance, usability, and user experience of gaze-based interfaces. Particularly in mobile settings, this error varies constantly as users move in front of and look at different parts of a display. We envision a new class of gaze-based interfaces that are aware of the gaze estimation error and adapt to it in real time. As a first step towards this vision, we introduce an error model that is able to predict the gaze estimation error. Our method covers the major building blocks of mobile gaze estimation, specifically the mapping of pupil positions to scene camera coordinates, marker-based display detection, and the mapping of gaze from scene camera to on-screen coordinates. We develop our model through a series of principled measurements of a state-of-the-art head-mounted eye tracker. Continue reading
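To make the first of these building blocks concrete, here is a minimal sketch (plain NumPy, not the tracker's actual code) of one common way to map pupil positions to scene-camera coordinates: fitting a second-order polynomial from calibration samples. The function names and the choice of a quadratic model are illustrative assumptions, not the method from the paper.

```python
import numpy as np

def _design_matrix(pupil_xy):
    """Quadratic polynomial basis for 2D pupil positions (illustrative choice)."""
    px, py = pupil_xy[:, 0], pupil_xy[:, 1]
    return np.column_stack([np.ones_like(px), px, py, px * py, px**2, py**2])

def fit_pupil_to_scene(pupil_xy, scene_xy):
    """Fit the mapping from (N, 2) pupil positions to (N, 2) scene-camera
    coordinates via least squares over the calibration samples."""
    coeffs, *_ = np.linalg.lstsq(_design_matrix(pupil_xy), scene_xy, rcond=None)
    return coeffs

def map_pupil_to_scene(coeffs, pupil_xy):
    """Apply the fitted mapping to new pupil positions."""
    return _design_matrix(pupil_xy) @ coeffs
```

With six basis functions, at least six calibration points are needed; in practice one would use a denser calibration grid and validate the residual error, which is exactly the kind of quantity an error-aware interface could monitor.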
Eyetracking as an Input Modality
Multi-touch surfaces enable highly interactive and intuitive applications. Nevertheless, large devices also come with constraints: users may not be able to reach every part of the display without walking around or leaning over the surface. To compensate for this restriction, I present a method that uses mobile eyetracking as an additional input modality. In particular, I propose an approach relying on marker-based display recognition and homogeneous transformations. In a user study, I evaluated the implementation in terms of accuracy. From the results, I derived design guidelines for building such interfaces and discussed how to overcome limitations of the proposed system.
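As an illustration of the marker-based idea, the following sketch (plain NumPy with assumed names, not the original implementation) estimates a homography from four detected marker corners and uses it to project a gaze point from scene-camera coordinates onto display coordinates.

```python
import numpy as np

def compute_homography(src, dst):
    """Estimate the 3x3 homography mapping src points to dst points via the
    direct linear transform (DLT); src/dst are (N, 2) arrays, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null-space vector of the stacked constraints.
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def map_gaze(H, gaze_xy):
    """Project a single gaze point through the homography (homogeneous coords)."""
    p = H @ np.array([gaze_xy[0], gaze_xy[1], 1.0])
    return p[:2] / p[2]
```

In a real pipeline, the four `src` corners would come from marker detection in the scene-camera image, while `dst` would be the known display corner positions in pixels.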
During my studies, in the winter term 2011/2012, I attended the computer graphics course at Saarland University. It provided a lot of theoretical knowledge about rendering, especially raytracing. In addition, every student had to consolidate this knowledge by implementing essential parts of a raytracer in C++. Finally, there was a rendering competition. The main task was to implement certain features, such as procedural shading or photon mapping. Additionally, one had to design a 3D scene and a corresponding website emphasizing these features. You can see my final rendering below.
You can find some more submissions on the lecture page.
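For a flavor of the "essential parts" such a raytracer has to implement, here is a small ray-sphere intersection sketch. The course work itself was in C++; this Python version with assumed names is only an illustration of the idea.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance t to the nearest ray-sphere intersection in front
    of the ray origin, or None if the ray misses. `direction` must be a
    normalized 3D vector, so the quadratic's leading coefficient is 1."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None  # ray misses the sphere
    sqrt_d = math.sqrt(disc)
    # Prefer the nearer root; skip hits behind (or too close to) the origin.
    for t in ((-b - sqrt_d) / 2.0, (-b + sqrt_d) / 2.0):
        if t > 1e-6:
            return t
    return None
```

A full raytracer repeats this test against every object per ray (or uses an acceleration structure) and shades the nearest hit.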
On December 5th, I handed in my Bachelor's thesis. My topic was Mobile Payment with Smartphones and how such a system can be embedded into a market. But what's it all about?
Many well-known companies such as Google, Microsoft, or PayPal work on Mobile Payment procedures based on Near Field Communication (NFC). Given the large market for mobile apps and the solutions on the horizon, it is obvious that a trend towards NFC-based systems is emerging. My work covers this field and extends it by enhancing processes for retail. Furthermore, I put emphasis on the maintenance of product information, cart management, and the checkout process, so that customers as well as employees are relieved of manual effort. Accordingly, a software solution is introduced, based on the Android operating system and the NFC transmission standard.