MorsEar: Toward Generalizable Low-Resource Covert Messaging via Earable based Inertial Sensing

Abstract

Silent, eyes-free text entry remains challenging when speech and touch are impractical. Prior wearable systems require custom sensors or limit users to a small vocabulary. We present MorsEar, an IMU-only earable framework that maps near-ear micro-gestures (taps for dot/dash; slide, pull, and circle for space, delete, and send) into character-level Morse code, enabling unrestricted composition alongside a compact lexicon for lightweight on-device autocorrect. The result is a low-bandwidth, reduced-exposure communication channel that works eyes-free and voice-free in accessibility scenarios, silent zones, and constrained environments. MorsEar infers words using a physics-aware preprocessing stack and a compact CNN, feeding tempo-adaptive segmentation with rolling buffers; an on-device decoder provides real-time feedback entirely on the phone. In a 24-participant study (including four accessibility users) across Silent, Cafe, and Metro conditions, MorsEar achieved a CER of 7.3% and a WER of 12.5% (7.8% with autocorrect), with median WPM of 9.3, 9.1, and 5.8, respectively. Like other accessibility-oriented encodings such as Braille, Morse requires a brief familiarization period to learn the timing and rhythm of dots and dashes; once learned, MorsEar shows that commodity earable IMUs can support discreet, low-exposure text entry that scales beyond discrete commands to language-level interaction.
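For illustration only, the sketch below shows how a stream of recognized gestures could be mapped to character-level Morse text in the spirit of the abstract. The gesture names (tap_short, tap_long, slide, pull, circle), the fixed letter-gap threshold standing in for the paper's tempo-adaptive segmentation, and the character-level delete are all assumptions for exposition, not the MorsEar implementation.

# Illustrative sketch, not the authors' code. Gesture vocabulary, the fixed
# letter-gap threshold, and character-level delete are assumptions.

MORSE_TO_CHAR = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E", "..-.": "F",
    "--.": "G", "....": "H", "..": "I", ".---": "J", "-.-": "K", ".-..": "L",
    "--": "M", "-.": "N", "---": "O", ".--.": "P", "--.-": "Q", ".-.": "R",
    "...": "S", "-": "T", "..-": "U", "...-": "V", ".--": "W", "-..-": "X",
    "-.--": "Y", "--..": "Z",
}

# Hypothetical gesture vocabulary, loosely following the abstract:
# short tap -> dot, long tap -> dash; slide/pull/circle -> space/delete/send.
SYMBOL = {"tap_short": ".", "tap_long": "-"}
CONTROL = {"slide": "SPACE", "pull": "DELETE", "circle": "SEND"}


def decode(gestures, letter_gap=0.6):
    """Turn a list of (gesture, timestamp) pairs into text.

    A pause longer than `letter_gap` seconds between taps closes the current
    Morse letter; control gestures also close it. The fixed threshold stands
    in for the tempo-adaptive segmentation described in the abstract.
    """
    text, letter, last_t = [], [], None

    def flush():
        nonlocal letter
        if letter:
            text.append(MORSE_TO_CHAR.get("".join(letter), "?"))
            letter = []

    for gesture, t in gestures:
        if gesture in SYMBOL:
            if last_t is not None and t - last_t > letter_gap:
                flush()
            letter.append(SYMBOL[gesture])
            last_t = t
        elif gesture in CONTROL:
            flush()
            action = CONTROL[gesture]
            if action == "SPACE":
                text.append(" ")
            elif action == "DELETE" and text:
                text.pop()
            elif action == "SEND":
                break
    flush()
    return "".join(text)


if __name__ == "__main__":
    # "HI" then send: H = ...., I = ..
    stream = [("tap_short", 0.0), ("tap_short", 0.2), ("tap_short", 0.4),
              ("tap_short", 0.6), ("tap_short", 1.5), ("tap_short", 1.7),
              ("circle", 2.5)]
    print(decode(stream))  # -> "HI"

In the full system, the decoded characters would additionally pass through the compact-lexicon autocorrect mentioned in the abstract; that step is omitted here.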

Award
Honorable Mention
Authors
Garvit Chugh
Indian Institute of Technology, Jodhpur, Rajasthan, India
Indrajeet Ghosh
UMBC, Baltimore, Maryland, United States
Nirmalya Roy
University of Maryland Baltimore County, Baltimore, Maryland, United States
Sandip Chakraborty
IIT Kharagpur, Kharagpur, West Bengal, India
Suchetana Chakraborty
Indian Institute of Technology Jodhpur, Jodhpur, Rajasthan, India

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Thermal and Gestural Interaction

P1 - Room 133
7 presentations
2026-04-15, 20:15–21:45