Use your head & run

Paper session

Conference name
CHI 2020
A Skin-Stroke Display on the Eye-Ring Through Head-Mounted Displays
Abstract

We present the Skin-Stroke Display, a system mounted on the lens inside a head-mounted display that exerts subtle yet recognizable tactile feedback on the eye-ring using a motorized air jet. To inform our design of noticeable air-jet haptic feedback, we conducted a user study to identify absolute detection thresholds. Our results show that tactile sensitivity varies around the eye, and we determined a standard intensity (8 mbar) that prevents turbulent airflow from blowing into the eye. In a second study, we asked participants to adjust the intensity around the eye for equal sensation relative to the standard intensity. Next, we investigated the recognition of point and stroke stimuli at eight directions on the eye-ring, with and without induced cognitive load. Our longStroke stimulus achieved an accuracy of 82.6% without cognitive load and 80.6% with cognitive load simulated by the Stroop test. Finally, we demonstrate example applications that use the Skin-Stroke Display as an off-screen indicator, a tactile I/O progress display, and a tactile display.
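
As a rough illustration of how an application might drive the display as an off-screen indicator, the sketch below quantizes the bearing of an off-screen target into one of the eight eye-ring directions and builds a stroke command at the paper's standard 8 mbar intensity. The function names and the command dictionary are hypothetical, not the authors' API.

```python
import math

# Hypothetical sketch: map an off-screen target's bearing to one of the eight
# eye-ring directions from the paper, then build a stroke command at the
# standard 8 mbar intensity. The actuator interface is an assumption.

DIRECTIONS = ["right", "upper-right", "up", "upper-left",
              "left", "lower-left", "down", "lower-right"]

def bearing_to_direction(dx: float, dy: float) -> str:
    """Map a target offset (dx, dy) in view coordinates to one of 8 directions."""
    angle = math.degrees(math.atan2(dy, dx)) % 360.0   # 0 deg = right, CCW
    sector = int((angle + 22.5) // 45) % 8             # 45-degree sectors
    return DIRECTIONS[sector]

def stroke_toward(target_offset, pressure_mbar: float = 8.0) -> dict:
    """Build a stroke command at the paper's standard intensity (8 mbar)."""
    direction = bearing_to_direction(*target_offset)
    return {"type": "longStroke", "direction": direction,
            "pressure_mbar": pressure_mbar}

if __name__ == "__main__":
    print(stroke_toward((0.4, 0.9)))   # -> a stroke toward the upper-right eye-ring
```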

Award
Honorable Mention
Keywords
Skin-Stroke Display
Air Jet
Eye-Ring
Head-Mounted Display
Virtual Reality
Haptics
Authors
Wen-Jie Tseng
National Chiao Tung University & Institut Polytechnique de Paris, Hsinchu, Taiwan, ROC
Yi-Chen Lee
Institute of Multimedia Engineering, National Chiao Tung University, Hsinchu, Taiwan, ROC
Roshan L. Peiris
Rochester Institute of Technology, Rochester, NY, USA
Liwei Chan
Computer Science, National Chiao Tung University, Hsinchu, Taiwan, ROC
DOI

10.1145/3313831.3376700

Paper URL

https://doi.org/10.1145/3313831.3376700

Soundr: Head Position and Orientation Prediction Using a Microphone Array
Abstract

Although state-of-the-art smart speakers can hear a user's speech, unlike a human assistant these devices cannot interpret users' verbal references based on their head location and orientation. Soundr presents a novel interaction technique that leverages the built-in microphone array found in most smart speakers to infer the user's spatial location and head orientation using only their voice. With that extra information, Soundr can resolve users' references to objects, people, and locations based on the speaker's gaze, and can also provide relative directions. To provide training data for our neural network, we collected 751 minutes of data (50x that of the best prior work) from human speakers, leveraging a virtual reality headset to provide accurate head-tracking ground truth. Our results achieve an average positional error of 0.31 m and an orientation angle accuracy of 34.3° for each voice command. A user study evaluating user preferences for controlling IoT appliances by talking at them found this new approach to be fast and easy to use.
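
The abstract does not specify the network's input features; a common starting point for microphone-array source localization is the GCC-PHAT time difference of arrival between microphone pairs. The sketch below is an illustrative assumption rather than Soundr's actual pipeline.

```python
import numpy as np

# Hedged sketch of GCC-PHAT: estimate the time difference of arrival (TDOA)
# between two microphones. Whether Soundr feeds its network this kind of
# feature is an assumption made here for illustration only.

def gcc_phat(sig, ref, fs, max_tau=None):
    """Return the estimated delay (seconds) of `sig` relative to `ref`."""
    n = sig.shape[0] + ref.shape[0]
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-15                       # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift    # lag in samples
    return shift / float(fs)

# Example: 100 ms of noise shifted by 8 samples (0.5 ms at 16 kHz).
fs = 16000
rng = np.random.default_rng(0)
ref = rng.standard_normal(fs // 10)
sig = np.roll(ref, 8)
print(gcc_phat(sig, ref, fs))                    # expected ~0.0005 s
```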

Keywords
Smart speakers
Internet of Things
machine learning
acoustic source localization
Authors
Jackie (Junrui) Yang
Stanford University, Stanford, CA, USA
Gaurab Banerjee
Stanford University, Stanford, CA, USA
Vishesh Gupta
Stanford University, Stanford, CA, USA
Monica S. Lam
Stanford University, Stanford, CA, USA
James A. Landay
Stanford University, Stanford, CA, USA
DOI

10.1145/3313831.3376427

Paper URL

https://doi.org/10.1145/3313831.3376427

FitByte: Automatic Diet Monitoring in Unconstrained Situations Using Multimodal Sensing on Eyeglasses
Abstract

In an attempt to help users reach their health goals and practitioners understand the relationship between diet and disease, researchers have proposed many wearable systems to automatically monitor food consumption. When a person consumes food, they bring the food close to their mouth, take a sip or bite, chew, and then swallow. Most diet monitoring approaches focus on one of these aspects of food intake, but this narrow reliance requires high precision and often fails in the noisy and unconstrained situations common in a person's daily life. In this paper, we introduce FitByte, a multi-modal sensing approach on a pair of eyeglasses that tracks all phases of food intake. FitByte contains a set of inertial and optical sensors that allow it to reliably detect food intake events in noisy environments. It also has an on-board camera that opportunistically captures visuals of the food as the user consumes it. We evaluated the system in two studies with decreasing environmental constraints with 23 participants. On average, FitByte achieved an 89% F1-score in detecting eating and drinking episodes.
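
As a hedged sketch of the kind of post-processing such a detector needs, the snippet below merges individual intake detections into eating episodes and scores them with an episode-level F1 measure; the gap and tolerance parameters are illustrative assumptions, not FitByte's actual evaluation protocol.

```python
# Illustrative sketch: turn per-event intake detections (e.g. from fused
# inertial/optical channels) into eating episodes, then score them with an
# episode-level F1. The merging and matching rules are assumptions.

def group_into_episodes(detections, gap_s=60):
    """Merge detection timestamps (seconds) separated by < gap_s into episodes."""
    episodes = []
    for t in sorted(detections):
        if episodes and t - episodes[-1][1] < gap_s:
            episodes[-1][1] = t          # extend the current episode
        else:
            episodes.append([t, t])      # start a new episode
    return [(s, e) for s, e in episodes]

def f1_score(predicted, ground_truth, tolerance_s=30):
    """Episode-level F1: an episode matches if start times are within tolerance."""
    matched_pred = sum(any(abs(p[0] - g[0]) <= tolerance_s for g in ground_truth)
                       for p in predicted)
    matched_gt = sum(any(abs(p[0] - g[0]) <= tolerance_s for p in predicted)
                     for g in ground_truth)
    precision = matched_pred / len(predicted) if predicted else 0.0
    recall = matched_gt / len(ground_truth) if ground_truth else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# Example: three detected bites/sips collapse into one lunch episode.
pred = group_into_episodes([12_000, 12_030, 12_070])
print(pred, f1_score(pred, ground_truth=[(11_990, 12_100)]))
```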

Keywords
Eating Detection
Drinking Detection
Diet Monitoring
Health Sensing
Activity Recognition
Wearable Computing
Earables
Ubiquitous Computing
Authors
Abdelkareem Bedri
Carnegie Mellon University, Pittsburgh, PA, USA
Diana Li
Carnegie Mellon University, Pittsburgh, PA, USA
Rushil Khurana
Carnegie Mellon University, Pittsburgh, PA, USA
Kunal Bhuwalka
Carnegie Mellon University, Pittsburgh, PA, USA
Mayank Goel
Carnegie Mellon University, Pittsburgh, PA, USA
DOI

10.1145/3313831.3376869

Paper URL

https://doi.org/10.1145/3313831.3376869

BISHARE: Exploring Bidirectional Interactions Between Smartphones and Head-Mounted Augmented Reality
Abstract

In pursuit of a future where head-mounted displays (HMDs) can be used in tandem with smartphones and other smart devices, we present BISHARE, a design space of cross-device interactions between smartphones and AR HMDs. Our design space is unique in that it is bidirectional: it examines both how the HMD can be used to enhance smartphone tasks and how the smartphone can be used to enhance HMD tasks. We then present an interactive prototype that enables cross-device interactions across the proposed design space. A 12-participant user study demonstrates the promise of the design space and provides insights, observations, and guidance for future work.
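
A minimal sketch of what a bidirectional cross-device event might look like is shown below; the message schema and task names are assumptions for illustration and not BISHARE's actual protocol.

```python
import json

# Hypothetical sketch of a phone<->HMD event message; either device can be
# the source or the target, mirroring the bidirectional design space.

def make_message(source, target, task, payload):
    """Serialize a cross-device event as JSON."""
    return json.dumps({"source": source, "target": target,
                       "task": task, "payload": payload})

# The HMD augments a smartphone task (e.g., previewing content in 3D)...
print(make_message("smartphone", "hmd", "preview-3d", {"item": "map-tile-42"}))
# ...and the smartphone augments an HMD task (e.g., acting as a touch controller).
print(make_message("hmd", "smartphone", "request-touch-input", {"mode": "trackpad"}))
```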

Keywords
Augmented Reality
Smartphones
Cross-Device Computing
Mixed-Reality Computing
Authors
Fengyuan Zhu
University of Toronto, Toronto, ON, Canada
Tovi Grossman
University of Toronto, Toronto, ON, Canada
DOI

10.1145/3313831.3376233

Paper URL

https://doi.org/10.1145/3313831.3376233

RunAhead: Exploring Head Scanning based Navigation for Runners
Abstract

Navigation systems for runners commonly provide turn-by-turn directions via voice and/or map-based visualizations. Voice directions demand constant attention, while map-based guidance requires regular consultation; both disrupt the running activity. To address this, we designed RunAhead, a navigation system that uses head scanning to query for navigation feedback, and we explored its suitability for runners in an outdoor experiment. In our design, the runner receives simple and intuitive navigation feedback about the path they are looking at through three different feedback modes: haptic, music, and audio cues. In our experiment, we compare the resulting three versions of RunAhead with a baseline voice-based navigation system. We find that demand and error are equivalent across all four conditions. However, the head-scanning-based haptic and music conditions are preferred over the baseline, and these preferences are influenced by runners' habits. With this study we contribute insights for designing navigation support for runners.
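
As a rough sketch of the head-scanning query, the snippet below picks the candidate path closest to the runner's head yaw and returns a positive or negative cue; the angular threshold and cue encoding are illustrative assumptions, not RunAhead's exact design.

```python
# Illustrative sketch: when the runner turns their head toward a candidate
# path, select the path whose bearing is closest to the head yaw and answer
# with a positive cue if it is the route to follow.

def angular_diff(a, b):
    """Smallest absolute difference between two compass bearings (degrees)."""
    return abs((a - b + 180) % 360 - 180)

def query_feedback(head_yaw_deg, path_bearings_deg, correct_path, fov_deg=30):
    """Return feedback for the path the runner is looking at, if any."""
    candidates = [(angular_diff(head_yaw_deg, b), i)
                  for i, b in enumerate(path_bearings_deg)]
    diff, looked_at = min(candidates)
    if diff > fov_deg:
        return None                      # not clearly looking down any path
    # Positive cue (e.g. music resumes / short vibration) on the correct path.
    return "positive" if looked_at == correct_path else "negative"

# Intersection with paths at bearings 350, 80 and 170 degrees; the route
# continues on the 80-degree path. The runner glances roughly east (75 deg).
print(query_feedback(75, [350, 80, 170], correct_path=1))   # -> "positive"
```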

Keywords
Navigation for Running
Head Scanning
Audio Feedback
Haptic Feedback
Authors
Danilo Gallo
Naver Labs Europe, Grenoble, France
Shreepriya Shreepriya
Naver Labs Europe, Grenoble, France
Jutta Willamowski
Naver Labs Europe, Grenoble, France
DOI

10.1145/3313831.3376828

Paper URL

https://doi.org/10.1145/3313831.3376828