LipLearner: Customizable Silent Speech Interactions on Mobile Devices

Abstract

Silent speech interfaces are a promising technology that enables private communication in natural language. However, previous approaches support only a small and inflexible vocabulary, which limits expressiveness. We leverage contrastive learning to learn efficient lipreading representations, enabling few-shot command customization with minimal user effort. Our model exhibits high robustness to different lighting, posture, and gesture conditions on an in-the-wild dataset. For 25-command classification, an F1-score of 0.8947 is achievable using only one shot, and performance can be further boosted by adaptively learning from more data. This generalizability allowed us to develop a mobile silent speech interface empowered with on-device fine-tuning and visual keyword spotting. A user study demonstrated that with LipLearner, users could define their own commands with high reliability, guaranteed by an online incremental learning scheme. Subjective feedback indicated that our system provides the essential functionality for customizable silent speech interaction with high usability and learnability.
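The few-shot customization described in the abstract can be pictured as matching a new lip-movement clip against per-command prototype embeddings in a contrastively learned feature space. The sketch below is illustrative only and is not the authors' implementation: `LipEncoder` is a hypothetical placeholder for a pretrained lip encoder, and the toy tensors stand in for real lip-region video clips.

```python
# Illustrative sketch (not LipLearner's code): one-shot command classification
# by nearest-prototype matching in a normalized embedding space, the kind of
# space a contrastive objective is trained to produce.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LipEncoder(nn.Module):
    """Hypothetical stand-in: maps (batch, frames, H, W) clips to unit embeddings."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim))

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(clips), dim=-1)


def build_prototypes(encoder, support_clips, support_labels, num_classes):
    """Average the few support embeddings per command into one prototype each."""
    with torch.no_grad():
        emb = encoder(support_clips)
    protos = torch.zeros(num_classes, emb.shape[1])
    for c in range(num_classes):
        protos[c] = emb[support_labels == c].mean(dim=0)
    return F.normalize(protos, dim=-1)


def classify(encoder, query_clips, prototypes):
    """Assign each query clip to the command with the most similar prototype (cosine)."""
    with torch.no_grad():
        emb = encoder(query_clips)
    sims = emb @ prototypes.T  # cosine similarity, since embeddings are normalized
    return sims.argmax(dim=-1)


if __name__ == "__main__":
    torch.manual_seed(0)
    encoder = LipEncoder()
    # Toy data: 25 commands, one "shot" each, 8-frame 32x32 grayscale clips.
    support = torch.randn(25, 8, 32, 32)
    labels = torch.arange(25)
    protos = build_prototypes(encoder, support, labels, num_classes=25)
    queries = torch.randn(5, 8, 32, 32)
    print(classify(encoder, queries, protos))
```

Adding a new user-defined command in this scheme amounts to encoding its registered shots and appending one more prototype, which is why few-shot customization requires minimal user effort; the incremental learning described in the paper would refine these representations as more samples arrive.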

Award
Best Paper
Authors
Zixiong Su
The University of Tokyo, Tokyo, Japan
Shitao Fang
The University of Tokyo, Tokyo, Japan
Jun Rekimoto
The University of Tokyo, Tokyo, Japan
Paper URL

https://doi.org/10.1145/3544548.3581465

Video

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Speech and Remapping Techniques

Hall C
6 presentations
2023-04-26, 20:10 – 21:35