Gaze is known to be a dominant modality for conveying spatial information, and it has been used for grounding in human-robot dialogues. In this work, we present the prototype of a gaze-supported multi-modal dialogue system that enhances two core tasks in human-robot collaboration:
Gaze-guided Object Classification
We recently published a prototype for gaze-guided object classification at the UbiComp 2016 conference. The topic also attracted the interest of Pupil Labs, the manufacturer of the eye tracking device we used.
WaterCoaster: A Device to Encourage People in a Playful Fashion to Reach Their Daily Water Intake Level
The WaterCoaster started as a seminar project (Gamified Life) comprising the design of a hardware prototype and a mobile app that measures a person's water intake. Using gamification elements, we wanted to persuade users to drink more frequently and to reach a healthier amount of water intake during work time. We published the results as late-breaking work at CHI 2016.
Master Thesis: Gaze Estimation Error in Mobile Eye Tracking
Computational Modelling and Prediction of Gaze Estimation Error for Head-mounted Eye Trackers
The gaze estimation error is inherent in head-mounted eye trackers and seriously impacts the performance, usability, and user experience of gaze-based interfaces. Particularly in mobile settings, this error varies constantly as users move in front of, and look at different parts of, a display. We envision a new class of gaze-based interfaces that are aware of the gaze estimation error and adapt to it in real time. As a first step towards this vision, we introduce an error model that is able to predict the gaze estimation error. Our method covers the major building blocks of mobile gaze estimation, specifically the mapping of pupil positions to scene camera coordinates, marker-based display detection, and the mapping of gaze from scene camera to on-screen coordinates. We develop our model through a series of principled measurements of a state-of-the-art head-mounted eye tracker.
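The last building block, mapping gaze from scene camera to on-screen coordinates, is typically realized with a planar homography estimated from the detected display (marker) corners. As an illustration only, a minimal sketch in Python/NumPy could look like the following; the function names, corner coordinates, and screen resolution are hypothetical and not taken from the paper:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography mapping src -> dst (four or more
    point pairs) via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, i.e. the right-singular
    # vector belonging to the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def map_gaze_to_screen(H, gaze_xy):
    """Project a gaze point from scene-camera pixels to on-screen
    pixels using homogeneous coordinates."""
    p = H @ np.array([gaze_xy[0], gaze_xy[1], 1.0])
    return p[:2] / p[2]

# Hypothetical values: the display corners as detected (via markers)
# in the scene camera image, and the known screen resolution.
scene_corners = [(120, 80), (520, 95), (510, 390), (115, 370)]
screen_corners = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]
H = estimate_homography(scene_corners, screen_corners)
gaze_on_screen = map_gaze_to_screen(H, (320, 240))
```

With four exact correspondences, the DLT yields the homography exactly; with more (or noisy) correspondences, it gives a least-squares estimate. In practice a library routine such as OpenCV's homography estimation would be used instead.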
A Seminar on Human-Robot Interaction
Aero One – Final Prototype 
This post presents images and first impressions of the final prototype of the Aero One portable loudspeaker. The audio quality is adequate for the size and cost of the device. The low frequencies are a bit quiet, probably due to the chosen drivers and the low-power amplifier. On the other hand, an estimated runtime of around 6 h at high volume, and a considerably longer runtime at room volume, is very appealing.
Aero One – Decisions & Prototype Assembly 
In my previous post I described my ideas for a lightweight portable audio system, the Aero One. Most of the open questions, e.g., concerning the driver, are now resolved, along with some new issues that arose during prototype assembly. Below you can find an overview outlining my decisions on these points:
- The Visaton FR 10 WP will be used as the driver due to its smaller form factor.
- The current revision of Aero One won't support Bluetooth, as no module was available at a price that justified the feature.
- Volume will be controlled with a stereo potentiometer.
- Integrated charging via a solar cell has been dropped, because the available modules do not provide enough power (only about 1 W).
- The housing will be based on a piece of solid cardboard tube; the drivers will be mounted at each end.
- The user interface will be divided into two parts: on/off switch and volume control on the front side; micro-USB port and audio jack on the back side.
Aero One – Planning and First Steps 
Lately, as the days grew longer, I thought it was time to start a new project: Aero One. The name stands for a new portable audio system similar to The Bee, but smaller and more lightweight. The weight and dimensions of The Bee made it impractical to carry to a park on a summer day. A side effect of leaving The Bee at home was that cheap 15 € mini speakers frequently came into action instead. They are indeed better than raw smartphone sound, but compared to any loudspeaker with even a modest ambition for high-fidelity sound, there is simply no contest. More expensive mobile products were not an alternative for me, because 1) I prefer to build such things myself and 2) I don't like hyped music pills, where most of the price is due to the brand.
Evaluation of Mobile Eyetracking as Input Modality for Multitouch Surfaces
Eyetracking as an Input Modality
Multi-touch surfaces enable highly interactive and intuitive applications. Nevertheless, large devices also have constraints: users may not be able to reach every part of the display without walking around or leaning over the surface. To compensate for this restriction, I present a method that uses mobile eye tracking as an additional input modality. In particular, I propose an approach relying on marker-based display recognition and homogeneous transformations. In a user study, I evaluated the implementation in terms of accuracy. From the results, I derived design guidelines for building such interfaces and discussed how to overcome the limitations of the proposed system.
Raytracer | Computer Graphics
During my studies, in the winter term 2011/2012, I attended the computer graphics course at Saarland University. It provided a lot of theoretical knowledge about rendering, especially raytracing. In addition, every student had to consolidate this knowledge by implementing essential parts of a raytracer in C++. Finally, there was a rendering competition. The main task was to implement certain features, such as procedural shading or photon mapping. Additionally, each participant had to design a 3D scene and a corresponding website emphasizing these features. You can see my final rendering below.
You can find some more submissions on the lecture page.