RubySlippers: Supporting Content-based Voice Navigation for How-to Videos

Abstract

Directly manipulating the timeline, such as scrubbing for thumbnails, is the standard way of controlling how-to videos. However, when how-to videos involve physical activities, people inconveniently alternate between controlling the video and performing the tasks. Adopting a voice user interface allows people to control the video with voice while performing the tasks with their hands. However, naively translating timeline manipulation into a voice user interface (VUI) results in temporal referencing (e.g., "rewind 20 seconds"), which requires a different mental model for navigation and thereby limits users' ability to peek into the content. We present RubySlippers, a system that supports efficient content-based voice navigation through keyword-based queries. Our computational pipeline automatically detects referenceable elements in the video and finds the video segmentation that minimizes the number of navigational commands needed. Our evaluation (N=12) shows that participants could perform three representative navigation tasks with fewer commands and less frustration using RubySlippers than with the conventional voice-enabled video interface.

Authors
Minsuk Chang
KAIST, Daejeon, Republic of Korea
Mina Huh
KAIST, Daejeon, Republic of Korea
Juho Kim
KAIST, Daejeon, Republic of Korea
DOI

10.1145/3411764.3445131

Paper URL

https://doi.org/10.1145/3411764.3445131

Conference: CHI 2021

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

Session: Engineering Interactive Applications

[A] Paper Room 05, 2021-05-13 17:00~19:00 / [B] Paper Room 05, 2021-05-14 01:00~03:00 / [C] Paper Room 05, 2021-05-14 09:00~11:00