Earable and Hearable

Conference Name
CHI 2025
EchoBreath: Continuous Respiratory Behavior Recognition in the Wild via Acoustic Sensing on Smart Glasses
Abstract

Monitoring the occurrence count of abnormal respiratory symptoms helps provide critical support for respiratory health. Despite this need, an unobtrusive and reliable method that can be used effectively in real-world settings is still lacking. In this paper, we present EchoBreath, a combined passive and active acoustic sensing system for monitoring abnormal respiratory symptoms. EchoBreath makes novel use of a speaker and microphone mounted under the glasses frame to emit ultrasonic waves and capture both passive sounds and echo profiles, which can effectively distinguish subject-aware behaviors from background noise. Furthermore, a lightweight neural network with a 'Null' class and open-set filtering mechanisms substantially improves real-world applicability by filtering out unrelated activities. Our experiments, involving 25 participants, demonstrate that EchoBreath can recognize 6 typical respiratory symptoms in a laboratory setting with an accuracy of 93.1%. Additionally, an in-the-semi-wild study with 10 participants further validates that EchoBreath can continuously monitor respiratory abnormalities under real-world conditions. We believe that EchoBreath can serve as an unobtrusive and reliable way to monitor abnormal respiratory symptoms.
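The combination of active echo sensing, passive sound capture, and open-set rejection described above can be illustrated in code. Below is a minimal Python sketch; the parameters (an 18-21 kHz chirp, a 48 kHz sample rate, a 0.85 confidence threshold) and the toy linear classifier are illustrative assumptions, not details from the paper.

```python
import numpy as np

FS = 48_000  # assumed microphone sample rate (Hz)

def chirp(f0=18_000.0, f1=21_000.0, dur=0.01, fs=FS):
    """Ultrasonic probe signal played from the speaker under the glasses frame."""
    t = np.arange(int(dur * fs)) / fs
    return np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t ** 2 / (2 * dur)))

def echo_profile(rx, tx):
    """Cross-correlate received audio with the probe; peaks encode reflectors
    (e.g. moving facial or chest surfaces) at different distances."""
    corr = np.correlate(rx, tx, mode="valid")
    return np.abs(corr) / (np.linalg.norm(tx) + 1e-9)

def passive_band_energy(rx, fs=FS, cutoff=8_000.0):
    """Energy of the audible band, capturing passive sounds such as coughs."""
    spec = np.abs(np.fft.rfft(rx))
    freqs = np.fft.rfftfreq(len(rx), 1.0 / fs)
    return float((spec[freqs < cutoff] ** 2).sum())

def classify_frame(features, weights, null_class=0, threshold=0.85):
    """Toy classifier with a 'Null' class and open-set rejection: frames the
    model is unsure about fall back to Null instead of a symptom label."""
    logits = weights @ features
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    label = int(np.argmax(probs))
    return null_class if probs[label] < threshold else label
```

The confidence threshold is what makes the scheme open-set: anything that does not resemble a trained symptom class strongly enough is mapped to Null rather than forced into the closest class.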

Award
Honorable Mention
Authors
Kaiyi Guo
Shanghai Jiao Tong University, Shanghai, China
Qian Zhang
Shanghai Jiao Tong University, Shanghai, China
Dong Wang
Shanghai Jiao Tong University, Shanghai, China
DOI

10.1145/3706598.3714171

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714171

Effects of Acoustic Transparency of Wearable Audio Devices on Audio AR
Abstract

The provision of audio augmented reality (AAR) experiences is becoming more widespread. In this study, we conducted physical and subjective evaluations of five devices with different shapes and transparency modes to investigate how device design influences the AAR experience from the perspective of acoustic transparency. In the subjective evaluation, perceived transparency, impressions of real-world sound, and subjective impressions of the AAR experience while wearing each device were evaluated for two distinct content types. We found that device design can potentially influence impressions of real-world sound, such as auditory source width, listener envelopment, and punch, as well as subjective impressions during the AAR experience. Devices with high transparency were more likely to draw attention to real-world sounds while users were experiencing AAR, and the experience was rated as enjoyable and natural. Two demonstration experiments showed that adding virtual sounds to real content through open-ear earphones can provide acoustic effects such as distance enhancement.

Authors
Yuki Watanabe
Nippon Telegraph and Telephone Corporation, Musashino, Tokyo, Japan
Hironobu Chiba
Nippon Telegraph and Telephone Corporation, Musashino, Tokyo, Japan
Kenichi Noguchi
Nippon Telegraph and Telephone Corporation, Musashino, Tokyo, Japan
Hiroaki Itou
Nippon Telegraph and Telephone Corporation, Musashino, Tokyo, Japan
Tatsuya Kako
Nippon Telegraph and Telephone Corporation, Musashino, Tokyo, Japan
DOI

10.1145/3706598.3713907

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713907

BudsID: Mobile-Ready and Expressive Finger Identification Input for Earbuds
Abstract

Wireless earbuds are an appealing platform for on-the-go wearable computing. However, their small size and out-of-view location mean they support only a limited range of inputs. We propose finger identification input on earbuds as a novel technique to resolve these problems. The technique associates touches by different fingers with different responses. To enable it on earbuds, we adapted prior work on smartwatches to develop a wireless earbud featuring a magnetometer that detects fields from a magnetic ring. A first study reveals that participants achieve rapid, precise earbud touches with different fingers, even while mobile (time: 0.98s, errors: 5.6%), and that touching fingers can be accurately classified (96.9%). A second study shows strong performance with a more expressive technique involving multi-finger double-taps (inter-touch time: 0.39s, errors: 2.8%) while maintaining high accuracy (94.7%). We close by exploring and evaluating the design of earbud finger identification applications and demonstrating the feasibility of our system on low-resource devices.
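As a rough sketch of the sensing pipeline described above: a magnetic ring perturbs the field measured by the earbud's magnetometer differently depending on which finger touches, so simple statistics over a short 3-axis window can separate fingers. The features and the nearest-centroid classifier below are assumptions chosen for plausibility on low-resource firmware, not the paper's actual model.

```python
import numpy as np

def touch_features(mag_window):
    """mag_window: (N, 3) magnetometer samples (uT) around a detected touch.
    Per-axis means and ranges plus overall field statistics summarize how
    the ring's field moved during the touch."""
    mean = mag_window.mean(axis=0)
    rng = mag_window.max(axis=0) - mag_window.min(axis=0)
    mag = np.linalg.norm(mag_window, axis=1)
    return np.concatenate([mean, rng, [mag.mean(), mag.std()]])

class NearestCentroid:
    """Lightweight classifier: cheap enough for on-device inference."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array(
            [X[y == c].mean(axis=0) for c in self.classes_]
        )
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2)
        return self.classes_[d.argmin(axis=1)]
```

Usage follows the familiar fit/predict pattern: `NearestCentroid().fit(train_X, train_y).predict(test_X)`, where each row of `X` is a `touch_features` vector.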

Authors
Jiwan Kim
KAIST, Daejeon, Korea, Republic of
Mingyu Han
UNIST, Ulsan, Korea, Republic of
Ian Oakley
KAIST, Daejeon, Korea, Republic of
DOI

10.1145/3706598.3714133

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714133

FlexEar-Tips: Shape-Adjustable Ear Tips Using Pressure Control
Abstract

We introduce FlexEar-Tips, a dynamic ear tip system designed for next-generation hearables. The ear tips are controlled by an air pump and solenoid valves, enabling size adjustments for comfort and functionality. FlexEar-Tips includes an air pressure sensor to monitor ear tip size, allowing it to adapt to environmental conditions and user needs. In our evaluation, we first investigated size-control accuracy and the smallest size change perceivable by touch in the user's ear. We then evaluated users' ability to identify patterns in the haptic notification system, the impact on the music listening experience, the relationship between ear tip size and sound localization ability, and the reduction of humidity in the ear, evaluated on a model. We propose new interaction modalities for adaptive hearables and discuss health monitoring, immersive auditory experiences, haptic notifications, biofeedback, and sensing.
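The pump-valve-sensor arrangement described above implies a simple closed-loop pressure controller. Below is a minimal Python sketch of one plausible bang-bang scheme; the hardware interfaces (read_pressure_kpa, pump, valve) and the target/deadband values are hypothetical placeholders, not details from the paper.

```python
import time

TARGET_KPA = 5.0   # desired tip pressure; illustrative value only
DEADBAND = 0.2     # tolerance around the target before actuating

def regulate(read_pressure_kpa, pump, valve, target=TARGET_KPA):
    """Bang-bang controller: inflate the tip when under target, vent when
    over, hold otherwise. All hardware objects are hypothetical interfaces."""
    while True:
        p = read_pressure_kpa()
        if p < target - DEADBAND:
            valve.close()
            pump.on()        # inflate: ear tip grows toward target size
        elif p > target + DEADBAND:
            pump.off()
            valve.open()     # vent: ear tip shrinks
        else:
            pump.off()
            valve.close()    # hold current size
        time.sleep(0.05)     # ~20 Hz control loop
```

The deadband keeps the pump and valve from chattering around the setpoint; if finer size control were needed, a PID controller over pump duty cycle would be a natural refinement.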

Authors
Takashi Amesaka
Keio University, Yokohama, Japan
Takumi Yamamoto
Keio University, Yokohama, Japan
Hiroki Watanabe
Future University Hakodate, Hakodate, Japan
Buntarou Shizuki
University of Tsukuba, Tsukuba, Ibaraki, Japan
Yuta Sugiura
Keio University, Yokohama, Japan
DOI

10.1145/3706598.3714177

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714177

SmarTeeth: Augmenting Manual Toothbrushing with In-ear Microphones
Abstract

Improper toothbrushing practices persist as a primary cause of oral health issues such as tooth decay and gum disease. Despite the availability of high-end electric toothbrushes that offer some guidance, manual toothbrushes remain widely used due to their simplicity and convenience. We present SmarTeeth, an earable-based toothbrushing monitoring system designed to augment manual toothbrushing with functionalities typically offered only by high-end electric toothbrushes, such as brushing surface tracking. The underlying idea of SmarTeeth is to leverage in-ear microphones on earphones to capture toothbrushing sounds transmitted from the oral cavity to the ear canals through facial bones and tissues. The distinct propagation paths of brushing sounds from various dental locations to each ear canal provide the foundational basis for our methods to accurately identify different brushing locations. By extracting customized features from these sounds, we can detect brushing locations using a deep-learning model. With only one registration session (~2 mins) for a new user, the average accuracy is 92.7% for detecting six regions and 75.6% for sixteen tooth surfaces. With three registration sessions (~6 mins), the performance can be boosted to 98.8% and 90.3% for six-region and sixteen-surface tracking, respectively. A key advantage of using earphones for monitoring is that they provide natural auditory feedback to alert users when they are overbrushing or underbrushing. A comprehensive evaluation validates the effectiveness of SmarTeeth under various conditions (different users, brushes, brushing orders, noise, etc.), and feedback from the user study (N=13) indicates that users found the system highly useful (6.0/7.0) and reported a low workload (2.5/7.0) while using it. Our findings suggest that SmarTeeth could offer a scalable and effective solution to improve oral health globally by providing manual toothbrush users with advanced brushing monitoring capabilities.
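To make the feature idea above concrete, here is a minimal Python sketch of binaural in-ear features for brushing-location classification, assuming a 16 kHz sample rate. The band energies and inter-ear level difference are illustrative stand-ins for the paper's customized features, which feed a deep-learning model.

```python
import numpy as np

FS = 16_000  # assumed in-ear microphone sample rate (Hz)

def band_energies(x, n_bands=16):
    """Log energies in linearly spaced frequency bands: each propagation
    path through facial bone and tissue filters the brushing sound
    differently, so the spectral shape hints at the brushing location."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    bands = np.array_split(spec, n_bands)
    return np.log(np.array([b.sum() for b in bands]) + 1e-9)

def brushing_features(left, right):
    """Concatenate per-ear band energies with the inter-ear level
    difference, which separates left-side from right-side brushing."""
    el, er = band_energies(left), band_energies(right)
    ild = el.sum() - er.sum()
    return np.concatenate([el, er, [ild]])
```

A classifier over such frame-level feature vectors, pooled over a brushing stroke, is one plausible way to map sounds to the six regions or sixteen surfaces reported in the abstract.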

Authors
Qiang Yang
University of Cambridge, Cambridge, United Kingdom
Yang Liu
University of Cambridge, Cambridge, United Kingdom
Jake Stuchbury-Wass
University of Cambridge, Cambridge, United Kingdom
Kayla-Jade Butkow
University of Cambridge, Cambridge, United Kingdom
Emeli Panariti
King’s College London, London, United Kingdom
Dong Ma
Singapore Management University, Singapore, Singapore
Cecilia Mascolo
University of Cambridge, Cambridge, United Kingdom
DOI

10.1145/3706598.3713893

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713893
