SmarTeeth: Augmenting Manual Toothbrushing with In-ear Microphones

Abstract

Improper toothbrushing practices persist as a primary cause of oral health issues such as tooth decay and gum disease. Despite the availability of high-end electric toothbrushes that offer some guidance, manual toothbrushes remain widely used due to their simplicity and convenience. We present SmarTeeth, an earable-based toothbrushing monitoring system designed to augment manual toothbrushing with functionalities typically offered only by high-end electric toothbrushes, such as brushing surface tracking. The underlying idea of SmarTeeth is to leverage the in-ear microphones on earphones to capture toothbrushing sounds transmitted from the oral cavity to the ear canals through facial bones and tissues. The distinct propagation paths of brushing sounds from different dental locations to each ear canal provide the basis for accurately identifying brushing locations. By extracting customized features from these sounds, we detect brushing locations with a deep-learning model. With only one registration session (~2 minutes) for a new user, the average accuracy is 92.7% for detecting six regions and 75.6% for sixteen tooth surfaces. With three registration sessions (~6 minutes), the performance improves to 98.8% and 90.3% for six-region and sixteen-surface tracking, respectively. A key advantage of using earphones for monitoring is that they provide natural auditory feedback to alert users when they are overbrushing or underbrushing. A comprehensive evaluation validates the effectiveness of SmarTeeth under various conditions (different users, brushes, brushing orders, noise, etc.), and feedback from a user study (N=13) indicates that users found the system highly useful (6.0/7.0) and reported a low workload (2.5/7.0) while using it. Our findings suggest that SmarTeeth could offer a scalable and effective solution to improve oral health globally by providing manual toothbrush users with advanced brushing monitoring capabilities.
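
The abstract outlines the sensing pipeline (in-ear audio capture, feature extraction, deep-learning classification of brushing location) without implementation details. The sketch below illustrates one plausible shape for such a pipeline; the sample rate, log-mel features, and small CNN are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an earable brushing-location pipeline (not the paper's code).
# Assumes stereo in-ear audio (left/right microphones) at 16 kHz and a 6-class
# output (one per brushing region, as in the abstract's six-region tracking).
import torch
import torch.nn as nn
import torchaudio

SAMPLE_RATE = 16_000          # assumed in-ear microphone sample rate
N_REGIONS = 6                 # six-region tracking reported in the abstract

# Log-mel spectrograms per channel stand in for the paper's "customized features".
melspec = torchaudio.transforms.MelSpectrogram(
    sample_rate=SAMPLE_RATE, n_fft=512, hop_length=160, n_mels=64
)
to_db = torchaudio.transforms.AmplitudeToDB()

def extract_features(stereo_audio: torch.Tensor) -> torch.Tensor:
    """stereo_audio: (2, num_samples) -> (2, n_mels, num_frames) log-mel features."""
    return to_db(melspec(stereo_audio))

class RegionClassifier(nn.Module):
    """Small CNN mapping two-channel log-mel features to brushing-region logits."""
    def __init__(self, n_classes: int = N_REGIONS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    # One second of synthetic stereo audio as a stand-in for a brushing segment.
    audio = torch.randn(2, SAMPLE_RATE)
    feats = extract_features(audio).unsqueeze(0)   # add batch dimension
    logits = RegionClassifier()(feats)
    print("predicted region:", logits.argmax(dim=1).item())
```

The two-channel input reflects the paper's use of both ear canals, since the left/right differences in the propagation paths are what distinguish brushing locations; the specific feature and model choices above are placeholders.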

Authors
Qiang Yang
University of Cambridge, Cambridge, United Kingdom
Yang Liu
University of Cambridge, Cambridge, United Kingdom
Jake Stuchbury-Wass
University of Cambridge, Cambridge, United Kingdom
Kayla-Jade Butkow
University of Cambridge, Cambridge, United Kingdom
Emeli Panariti
King’s College London, London, United Kingdom
Dong Ma
Singapore Management University, Singapore, Singapore
Cecilia Mascolo
University of Cambridge, Cambridge, United Kingdom
DOI

10.1145/3706598.3713893

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713893

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Earable and Hearable

Annex Hall F206
5 presentations
2025-04-28, 20:10–21:40