Enhancing Mobile Voice Assistants with WorldGaze

Abstract

Contemporary voice assistants require that objects of interest be specified in spoken commands. Of course, users are often looking directly at the object or place of interest – fine-grained, contextual information that is currently unused. We present WorldGaze, a software-only method for smartphones that provides the real-world gaze location of a user, which voice agents can utilize for rapid, natural, and precise interactions. We achieve this by simultaneously opening the front and rear cameras of a smartphone. The front-facing camera is used to track the head in 3D, including estimating its direction vector. As the geometry of the front and rear cameras is fixed and known, we can raycast the head vector into the 3D world scene as captured by the rear-facing camera. This allows the user to intuitively define an object or region of interest using their head gaze. We started our investigations with a qualitative exploration of competing methods before developing a functional, real-time implementation. We conclude with an evaluation showing that WorldGaze can be quick and accurate, opening new multimodal gaze+voice interactions for mobile voice agents.
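
The core geometric step is the raycast from the head pose, estimated by the front-facing camera, into the coordinate frame of the rear-facing camera. Below is a minimal numpy sketch of that transform under assumed conventions; the head pose, camera extrinsics, and rear-camera intrinsics are all hypothetical placeholder values, not the paper's implementation.

```python
import numpy as np

# --- Assumed inputs (hypothetical values for illustration) ---

# Head pose estimated from the front-facing camera, expressed in the
# front camera's coordinate frame (meters). The camera looks along +z,
# so a user gazing "through" the phone looks toward -z.
head_position = np.array([0.05, -0.02, 0.40])   # head ~40 cm from the phone
head_direction = np.array([0.10, 0.00, -1.00])  # gaze vector toward the scene
head_direction /= np.linalg.norm(head_direction)

# Fixed, device-specific extrinsics mapping front-camera coordinates
# into the rear-camera frame. The rear camera faces the opposite
# direction, modeled here as a 180-degree rotation about the x-axis
# plus a small translation between the two sensors.
R_front_to_rear = np.diag([1.0, -1.0, -1.0])
t_front_to_rear = np.array([0.0, 0.01, 0.0])

# Rear-camera intrinsics (focal lengths and principal point, pixels).
K = np.array([[1400.0,    0.0, 960.0],
              [   0.0, 1400.0, 540.0],
              [   0.0,    0.0,   1.0]])

# --- Raycast the head vector into the rear camera's view ---

# Express the gaze ray in rear-camera coordinates.
ray_origin = R_front_to_rear @ head_position + t_front_to_rear
ray_dir = R_front_to_rear @ head_direction

def project_gaze_point(distance_m: float) -> np.ndarray:
    """Project the point `distance_m` meters along the gaze ray onto
    the rear camera's image plane, returning pixel coordinates."""
    point = ray_origin + distance_m * ray_dir
    uv = K @ point            # pinhole projection
    return uv[:2] / uv[2]     # perspective divide

# With scene depth (e.g., a depth map or reconstructed 3D scene from
# the rear camera), the gaze target is the ray's first intersection
# with the scene; here we simply sample one plausible distance.
print(project_gaze_point(2.0))  # pixel the user is gazing at, ~2 m away
```

In a full implementation, the ray would be intersected with the rear camera's scene geometry (e.g., a depth map or 3D reconstruction) rather than sampled at a fixed distance, yielding the object or region the user is looking at.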

Keywords
WorldGaze
interaction techniques
mobile interaction
Authors
Sven Mayer
Carnegie Mellon University, Pittsburgh, PA, USA
Gierad Laput
Apple Inc. & Carnegie Mellon University, Cupertino, CA, USA
Chris Harrison
Carnegie Mellon University, Pittsburgh, PA, USA
DOI
10.1145/3313831.3376479
Paper URL
https://doi.org/10.1145/3313831.3376479

Conference: CHI 2020

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2020.acm.org/)

Session: Look at me

Paper session
Room: 311 KAUA'I
5 presentations
2020-04-27 20:00–21:15