A Skin-Stroke Display on the Eye-Ring Through Head-Mounted Displays
Description

We present the Skin-Stroke Display, a system mounted on the lens inside a head-mounted display that exerts subtle yet recognizable tactile feedback on the eye-ring using a motorized air jet. To inform our design of noticeable air-jet haptic feedback, we conducted a user study to identify absolute detection thresholds. Our results show that tactile sensitivity differs around the eyes, and we determined a standard intensity (8 mbar) that prevents turbulent airflow from blowing into the eyes. In a second study, we asked participants to adjust the intensity around the eye until the sensation matched the standard intensity. Next, we investigated the recognition of point and stroke stimuli in eight directions on the eye-ring, with and without induced cognitive load. Our longStroke stimulus achieved an accuracy of 82.6% without cognitive load and 80.6% with cognitive load simulated by the Stroop test. Finally, we demonstrate example applications that use the Skin-Stroke Display as an off-screen indicator, a tactile I/O progress display, and a tactile display.
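As a rough illustration of the stimuli described above, the sketch below maps eight evenly spaced directions on the eye-ring to nozzle angles and renders point and stroke stimuli at the 8 mbar standard intensity. The AirJet driver, its timing values, and the stroke span are hypothetical and are not the authors' implementation.

import time

STANDARD_INTENSITY_MBAR = 8.0   # standard intensity reported in the abstract
NUM_DIRECTIONS = 8              # eight directions around the eye-ring


def direction_to_angle(direction_index: int) -> float:
    """Map a direction index (0..7) to a nozzle angle in degrees on the eye-ring."""
    return (360.0 / NUM_DIRECTIONS) * direction_index


class AirJet:
    """Hypothetical driver for the motorized air jet mounted on the HMD lens."""

    def move_to(self, angle_deg: float) -> None:
        print(f"rotating nozzle to {angle_deg:.1f} deg")

    def blow(self, intensity_mbar: float, duration_s: float) -> None:
        print(f"blowing at {intensity_mbar} mbar for {duration_s:.2f} s")
        time.sleep(duration_s)


def render_point(jet: AirJet, direction_index: int, duration_s: float = 0.5) -> None:
    """Render a point stimulus: blow at a single location on the eye-ring."""
    jet.move_to(direction_to_angle(direction_index))
    jet.blow(STANDARD_INTENSITY_MBAR, duration_s)


def render_stroke(jet: AirJet, start_index: int, span_deg: float = 45.0,
                  steps: int = 10, duration_s: float = 1.0) -> None:
    """Render a stroke stimulus: sweep the nozzle across the eye-ring while blowing."""
    start_angle = direction_to_angle(start_index)
    for i in range(steps):
        jet.move_to(start_angle + span_deg * i / (steps - 1))
        jet.blow(STANDARD_INTENSITY_MBAR, duration_s / steps)


# Example: a point at direction 2 and a stroke starting at direction 6.
jet = AirJet()
render_point(jet, 2)
render_stroke(jet, 6)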

Soundr: Head Position and Orientation Prediction Using a Microphone Array
Description

Although state-of-the-art smart speakers can hear a user's speech, unlike a human assistant, these devices cannot figure out the user's verbal references based on head location and orientation. Soundr presents a novel interaction technique that leverages the built-in microphone array found in most smart speakers to infer the user's spatial location and head orientation using only their voice. With that extra information, Soundr can resolve users' references to objects, people, and locations based on the speaker's gaze, and can also provide relative directions. To provide training data for our neural network, we collected 751 minutes of data (50x that of the best prior work) from human speakers, using a virtual reality headset to provide accurate head-tracking ground truth. Our results achieve an average positional error of 0.31 m and an orientation angle accuracy of 34.3° for each voice command. A user study evaluating preferences for controlling IoT appliances by talking at them found this new approach to be fast and easy to use.
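As a rough sketch of the kind of model described above (not the paper's actual network), the following PyTorch snippet regresses the speaker's position and head yaw from multi-channel microphone-array spectrograms. The microphone count, feature shapes, and layer sizes are assumptions made purely for illustration.

import torch
import torch.nn as nn

N_MICS = 7          # assumed number of microphones in the array
N_FREQ_BINS = 128   # assumed spectrogram resolution
N_FRAMES = 64       # assumed number of time frames per voice command


class SoundrStyleRegressor(nn.Module):
    """Predicts (x, y, z) position in metres and head yaw as (cos, sin)."""

    def __init__(self) -> None:
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(N_MICS, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 5)  # x, y, z, cos(yaw), sin(yaw)

    def forward(self, spectrograms: torch.Tensor) -> torch.Tensor:
        # spectrograms: (batch, N_MICS, N_FREQ_BINS, N_FRAMES)
        features = self.encoder(spectrograms).flatten(1)
        return self.head(features)


# Example forward pass with random data standing in for real recordings.
model = SoundrStyleRegressor()
dummy = torch.randn(2, N_MICS, N_FREQ_BINS, N_FRAMES)
prediction = model(dummy)  # shape (2, 5)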

FitByte: Automatic Diet Monitoring in Unconstrained Situations Using Multimodal Sensing on Eyeglasses
Description

In an attempt to help users reach their health goals and practitioners understand the relationship between diet and disease, researchers have proposed many wearable systems to automatically monitor food consumption. When a person consumes food, they bring the food close to their mouth, take a sip or bite, chew, and then swallow. Most diet monitoring approaches focus on one of these aspects of food intake, but this narrow reliance requires high precision and often fails in the noisy and unconstrained situations common in a person's daily life. In this paper, we introduce FitByte, a multimodal sensing approach on a pair of eyeglasses that tracks all phases of food intake. FitByte contains a set of inertial and optical sensors that allow it to reliably detect food intake events in noisy environments. It also has an on-board camera that opportunistically captures visuals of the food as the user consumes it. We evaluated the system with 23 participants in two studies with progressively fewer environmental constraints. On average, FitByte achieved an 89% F1-score in detecting eating and drinking episodes.
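To illustrate the multimodal idea in the abstract, the sketch below fuses hypothetical per-window inertial and optical cues into intake-event predictions and scores them with the F1 measure reported above. The thresholds, window representation, and fusion rule are illustrative assumptions, not FitByte's detector.

from dataclasses import dataclass
from typing import List


@dataclass
class Window:
    hand_to_mouth_score: float  # from inertial sensors (assumed 0..1)
    chewing_score: float        # from optical jaw-motion sensing (assumed 0..1)


def detect_intake(windows: List[Window],
                  gesture_thresh: float = 0.6,
                  chew_thresh: float = 0.5) -> List[bool]:
    """Flag a window as food intake only when both modalities agree."""
    return [w.hand_to_mouth_score > gesture_thresh and
            w.chewing_score > chew_thresh for w in windows]


def f1_score(predicted: List[bool], actual: List[bool]) -> float:
    """F1 of per-window intake predictions against ground truth."""
    tp = sum(p and a for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(a and not p for p, a in zip(predicted, actual))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


# Example with two annotated windows: one true intake event, one non-event.
windows = [Window(0.9, 0.8), Window(0.7, 0.1)]
print(f1_score(detect_intake(windows), [True, False]))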

BISHARE: Exploring Bidirectional Interactions Between Smartphones and Head-Mounted Augmented Reality
Description

In pursuit of a future where head-mounted display (HMD) devices can be used in tandem with smartphones and other smart devices, we present BISHARE, a design space of cross-device interactions between smartphones and AR HMDs. Our design space is unique in that it is bidirectional: it examines both how the HMD can be used to enhance smartphone tasks and how the smartphone can be used to enhance HMD tasks. We then present an interactive prototype that enables cross-device interactions across the proposed design space. A 12-participant user study demonstrates the promise of the design space and provides insights, observations, and guidance for the future.

RunAhead: Exploring Head Scanning based Navigation for Runners
Description

Navigation systems for runners commonly provide turn-by-turn directions via voice and/or map-based visualizations. While voice directions demand constant attention, map-based guidance requires regular consultation. Both disrupt the running activity. To address this, we designed RunAhead, a navigation system that uses head scanning to query navigation feedback, and we explored its suitability for runners in an outdoor experiment. In our design, the runner receives simple, intuitive navigation feedback on the path they are looking at through three feedback modes: haptic, music, and audio cues. In our experiment, we compare the resulting three versions of RunAhead with a baseline voice-based navigation system. We find that demand and error are equivalent across all four conditions. However, the head-scanning-based haptic and music conditions are preferred over the baseline, and these preferences are influenced by runners' habits. With this study, we contribute insights for designing navigation support for runners.
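As a rough illustration of the head-scanning query described above (assumed logic, not RunAhead's implementation), the sketch below compares the runner's head yaw with candidate path bearings at an intersection and triggers confirm or reject feedback in the chosen mode. The tolerance angle and feedback functions are placeholders.

from typing import List, Optional


def angular_difference(a: float, b: float) -> float:
    """Smallest absolute difference between two bearings, in degrees."""
    return abs((a - b + 180.0) % 360.0 - 180.0)


def looked_at_path(head_yaw: float, path_bearings: List[float],
                   tolerance_deg: float = 30.0) -> Optional[int]:
    """Return the index of the path the runner is looking at, if any."""
    diffs = [angular_difference(head_yaw, bearing) for bearing in path_bearings]
    best = min(range(len(diffs)), key=diffs.__getitem__)
    return best if diffs[best] <= tolerance_deg else None


def give_feedback(is_route_path: bool, mode: str = "haptic") -> None:
    """Placeholder for the haptic / music / audio-cue feedback channels."""
    signal = "confirm" if is_route_path else "reject"
    print(f"[{mode}] {signal}")


# Example: the route continues on the path bearing 90 deg; the runner scans
# roughly east (85 deg), so the system confirms that direction.
paths = [0.0, 90.0, 180.0]
route_bearing = 90.0
idx = looked_at_path(head_yaw=85.0, path_bearings=paths)
if idx is not None:
    give_feedback(is_route_path=(paths[idx] == route_bearing), mode="haptic")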
