Stargazer: An Interactive Camera Robot for Capturing How-To Videos Based on Subtle Instructor Cues

Abstract

Live and pre-recorded video tutorials are an effective means for teaching physical skills such as cooking or prototyping electronics. A dedicated cameraperson following an instructor’s activities can improve production quality. However, instructors who do not have access to a cameraperson’s help often have to work within the constraints of static cameras. We present Stargazer, a novel approach for assisting with tutorial content creation using a camera robot that autonomously tracks regions of interest based on instructor actions to capture dynamic shots. Instructors can adjust Stargazer’s camera behaviors with subtle cues, including gestures and speech, allowing them to fluidly integrate camera control commands into instructional activities. Our user study with six instructors, each teaching a distinct skill, showed that participants could use Stargazer to create dynamic tutorial videos with a diverse range of subjects, camera framings, and camera angle combinations.
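The abstract does not detail Stargazer’s control pipeline, but the interaction it outlines, subtle cues steering autonomous camera behaviors, can be made concrete with a small sketch. The Python below is purely illustrative and not the authors’ implementation; every behavior name and cue string is a hypothetical stand-in:

```python
from dataclasses import dataclass
from enum import Enum, auto


class CameraBehavior(Enum):
    """Camera shots the robot can switch between (hypothetical set)."""
    TRACK_HANDS = auto()   # follow the instructor's hands at work
    TRACK_OBJECT = auto()  # frame an object the instructor holds up
    WIDE_SHOT = auto()     # pull back to an establishing shot


@dataclass(frozen=True)
class Cue:
    """A recognized instructor cue: a gesture or a speech phrase."""
    kind: str   # "gesture" or "speech"
    value: str  # e.g. "hold_up", "look at this"


# Hypothetical cue vocabulary; the paper's actual cue set differs.
CUE_TO_BEHAVIOR = {
    Cue("gesture", "hold_up"): CameraBehavior.TRACK_OBJECT,
    Cue("gesture", "point"): CameraBehavior.TRACK_HANDS,
    Cue("speech", "look at this"): CameraBehavior.TRACK_OBJECT,
    Cue("speech", "step back"): CameraBehavior.WIDE_SHOT,
}


def update_behavior(current: CameraBehavior, cue: Cue) -> CameraBehavior:
    """Switch behavior on a recognized cue; otherwise keep the current
    behavior so ordinary gestures and speech are simply ignored."""
    return CUE_TO_BEHAVIOR.get(cue, current)


if __name__ == "__main__":
    behavior = CameraBehavior.TRACK_HANDS
    for cue in [Cue("speech", "um, okay"), Cue("gesture", "hold_up")]:
        behavior = update_behavior(behavior, cue)
        print(f"{cue} -> {behavior}")
```

Defaulting to the current behavior for unrecognized cues is what would let camera commands blend into normal instruction, matching the abstract’s point that instructors can fluidly integrate control commands into their teaching.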

Authors
Jiannan Li
University of Toronto, Toronto, Ontario, Canada
Mauricio Sousa
University of Toronto, Toronto, Ontario, Canada
Karthik Mahadevan
University of Toronto, Toronto, Ontario, Canada
Bryan Wang
University of Toronto, Toronto, Ontario, Canada
Paula Akemi Aoyagui
University of Toronto, Toronto, Ontario, Canada
Nicole Yu
University of Toronto, Toronto, Ontario, Canada
Angela Yang
University of Toronto, Toronto, Ontario, Canada
Ravin Balakrishnan
University of Toronto, Toronto, Ontario, Canada
Anthony Tang
University of Toronto, Toronto, Ontario, Canada
Tovi Grossman
University of Toronto, Toronto, Ontario, Canada
Paper URL

https://doi.org/10.1145/3544548.3580896

Video

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Videos

Hall A
6 presentations
2023-04-27 01:35:00 – 2023-04-27 03:00:00