Silent, eyes-free text entry remains challenging when speech and touch are impractical. Prior wearable systems required custom sensors or limited users to a small vocabulary. We present MorsEar, an IMU-only earable framework that maps near-ear micro-gestures (taps for dot/dash; slide, pull, and circle for space, delete, and send) into character-level Morse code, enabling unrestricted composition, with a compact lexicon powering lightweight on-device autocorrect. The result is a low-bandwidth, reduced-exposure communication channel that works eyes-free and voice-free in accessibility scenarios, silent zones, and constrained environments. MorsEar infers words using a physics-aware preprocessing stack, tempo-adaptive segmentation over rolling buffers, and a compact CNN; an on-device decoder provides real-time feedback entirely on-phone. In a 24-participant study (including four accessibility users) across Silent, Cafe, and Metro conditions, MorsEar achieved a CER of 7.3% and a WER of 12.5%, reduced to 7.8% with autocorrect, with median entry rates of 9.3, 9.1, and 5.8 WPM, respectively. Like other accessibility-oriented encodings such as Braille, Morse requires a brief familiarization period to learn the timing and rhythm of dots and dashes; once learned, MorsEar shows that commodity earable IMUs can support discreet, low-exposure text entry that scales beyond discrete commands to language-level interaction.
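The gesture-to-Morse mapping described above can be sketched as a simple token decoder. This is an illustrative reconstruction, not the paper's implementation: the token names, the handling of character vs. word gaps, the `?` placeholder for unknown codes, and the omission of the circle/send gesture are all assumptions.

```python
# Hypothetical sketch: gesture tokens ("dot", "dash" from taps;
# "space" from a slide; "delete" from a pull) are buffered into a
# Morse symbol and committed via the ITU Morse table.
MORSE_TO_CHAR = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
    "--..": "Z",
}

def decode(tokens):
    """Decode a stream of gesture tokens into text.

    A "space" token commits the pending symbol as a character; a
    second consecutive "space" inserts a literal word space.
    "delete" removes the last committed character.
    """
    text, symbol = [], ""
    for t in tokens:
        if t == "dot":
            symbol += "."
        elif t == "dash":
            symbol += "-"
        elif t == "space":
            if symbol:
                text.append(MORSE_TO_CHAR.get(symbol, "?"))
                symbol = ""
            else:
                text.append(" ")
        elif t == "delete":
            if text:
                text.pop()
    if symbol:  # commit any trailing symbol at end of stream
        text.append(MORSE_TO_CHAR.get(symbol, "?"))
    return "".join(text)
```

For example, the token stream `dot, dash, space, space, dot, dot, dot` decodes to "A S". In the actual system, dot/dash discrimination and gap detection would come from the tempo-adaptive segmentation rather than pre-labeled tokens.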
ACM CHI Conference on Human Factors in Computing Systems