EYEditor: Towards On-the-Go Heads-Up Text Editing Using Voice and Manual Input

Abstract

On-the-go text editing is difficult, yet frequently done in everyday life. Using smartphones for editing text forces users into a heads-down posture, which can be undesirable and unsafe. We present EYEditor, a heads-up smartglass-based solution that displays the text on a see-through peripheral display and allows text editing with voice and manual input. The choices of output modality (visual and/or audio) and content presentation were made after a controlled experiment, which showed that sentence-by-sentence visual-only presentation is best for optimizing users' editing and path-navigation capabilities. A second experiment formally evaluated EYEditor against the standard smartphone-based solution for tasks with varied editing complexities and navigation difficulties. The results showed that EYEditor outperformed smartphones when either the path or the task became more difficult. Yet, the advantage of EYEditor became less salient when both the editing and the navigation were difficult. We discuss trade-offs and insights gained for future heads-up text-editing solutions.

Keywords
Heads-up Interaction
Smart glass
Text editing
Voice Interaction
EYEditor
Wearable Interaction
Mobile Interaction
Re-speaking
Manual-input
Authors
Debjyoti Ghosh
National University of Singapore, Singapore, Singapore
Pin Sym Foong
National University of Singapore, Singapore, Singapore
Shengdong Zhao
National University of Singapore, Singapore, Singapore
Can Liu
City University of Hong Kong, Kowloon, China
Nuwan Janaka
National University of Singapore, Singapore, Singapore
Vinitha Erusu
National University of Singapore, Singapore, Singapore
DOI

10.1145/3313831.3376173

Paper URL

https://doi.org/10.1145/3313831.3376173

Conference: CHI 2020

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2020.acm.org/)

Session: Text entry

Paper session
Room: 306AB
5 presentations
2020-04-28 18:00:00 – 2020-04-28 19:15:00