RubySlippers: Supporting Content-based Voice Navigation for How-to Videos


Directly manipulating the timeline, such as scrubbing for thumbnails, is the standard way of navigating how-to videos. However, when how-to videos involve physical activities, people must inconveniently alternate between controlling the video and performing the task. A voice user interface (VUI) lets people control the video by voice while performing the task with their hands. However, naively translating timeline manipulation into a VUI results in temporal referencing (e.g. ``rewind 20 seconds''), which requires a different mental model for navigation and thereby limits users' ability to peek into the content. We present RubySlippers, a system that supports efficient content-based voice navigation through keyword-based queries. Our computational pipeline automatically detects referenceable elements in the video and finds the video segmentation that minimizes the number of navigation commands needed. Our evaluation (N=12) shows that participants could perform three representative navigation tasks with fewer commands and less frustration using RubySlippers than with a conventional voice-enabled video interface.
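The abstract's core idea of keyword-based navigation can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' implementation: it assumes detected "referenceable elements" are stored as keywords attached to video segments, so a spoken keyword query jumps directly to the matching segment instead of requiring temporal commands.

```python
# Hypothetical sketch of content-based voice navigation (not RubySlippers'
# actual pipeline): referenceable elements detected per video segment are
# matched against a keyword query to return a jump target.

from dataclasses import dataclass


@dataclass
class Segment:
    start: float        # segment start time in seconds
    end: float          # segment end time in seconds
    keywords: set       # referenceable elements detected in this segment


def navigate(segments, query):
    """Return the start time of the first segment matching the keyword query,
    or None if no segment matches (falling back to temporal commands)."""
    q = query.lower()
    for seg in segments:
        if q in seg.keywords:
            return seg.start
    return None


# Example: a cooking how-to video split into keyword-annotated segments.
segments = [
    Segment(0, 45, {"intro"}),
    Segment(45, 130, {"whisk", "eggs"}),
    Segment(130, 300, {"bake", "oven"}),
]
print(navigate(segments, "Bake"))  # jumps to 130
```

Under this framing, choosing the segmentation matters: coarser segments mean fewer distinct jump targets, while finer segments require more follow-up commands to reach a point of interest, which is the trade-off the abstract's segmentation objective addresses.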

Minsuk Chang
KAIST, Daejeon, Korea, Republic of
Mina Huh
KAIST, Daejeon, Korea, Republic of
Juho Kim
KAIST, Daejeon, Korea, Republic of




Conference: CHI 2021

The ACM CHI Conference on Human Factors in Computing Systems (CHI 2021)

Session: Engineering Interactive Applications

[A] Paper Room 05, 2021-05-13 17:00:00~2021-05-13 19:00:00
[B] Paper Room 05, 2021-05-14 01:00:00~2021-05-14 03:00:00
[C] Paper Room 05, 2021-05-14 09:00:00~2021-05-14 11:00:00
Paper Room 05: 14 presentations