Temporal Segmentation of Creative Live Streams

Abstract

Many artists broadcast their creative process through live streaming platforms like Twitch and YouTube, and people often watch archives of these broadcasts later for learning and inspiration. Unfortunately, because live stream videos are often multiple hours long and hard to skim and browse, few can leverage the wealth of knowledge hidden in these archives. We present an approach for automatic temporal segmentation of creative live stream videos. Using an audio transcript and a log of software usage, the system segments the video into sections that the artist can optionally label with meaningful titles. We evaluate this approach by gathering feedback from expert streamers and comparing automatic segmentations to those made by viewers. We find that, while there is no one "correct" way to segment a live stream, our automatic method performs similarly to viewers, and streamers find it useful for navigating their streams after making slight adjustments and adding section titles.
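The abstract describes segmenting a stream using two signals: the audio transcript and a log of software usage. As a rough illustration only, and not the authors' actual algorithm, the sketch below proposes section boundaries wherever a long pause in the transcript coincides with a switch to a different tool in the usage log; the data classes, field names, and thresholds are all assumptions.

# Minimal sketch (illustrative, not the paper's method): combine long
# transcript pauses with tool switches from a usage log to propose
# candidate section boundaries. All structures and thresholds are assumed.

from dataclasses import dataclass
from typing import List

@dataclass
class TranscriptWord:
    text: str
    start: float  # seconds from stream start
    end: float

@dataclass
class UsageEvent:
    tool: str
    time: float  # seconds from stream start

def propose_boundaries(words: List[TranscriptWord],
                       events: List[UsageEvent],
                       pause_threshold: float = 20.0,
                       merge_window: float = 60.0) -> List[float]:
    """Return candidate section boundaries, in seconds from stream start."""
    # 1. Long silences between consecutive transcript words.
    pauses = [w2.start for w1, w2 in zip(words, words[1:])
              if w2.start - w1.end >= pause_threshold]

    # 2. Times at which the active tool changes.
    switches = [e2.time for e1, e2 in zip(events, events[1:])
                if e2.tool != e1.tool]

    # 3. Keep pauses that fall near a tool switch; drop near-duplicates.
    boundaries: List[float] = []
    for p in pauses:
        if any(abs(p - s) <= merge_window for s in switches):
            if not boundaries or p - boundaries[-1] > merge_window:
                boundaries.append(p)
    return boundaries

In practice the boundaries would then be shown to the streamer, who (as the abstract notes) can adjust them and add meaningful section titles.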

Keywords
live streaming
creativity
video segmentation
Authors
C. Ailie Fraser
Adobe Research, Seattle, WA, USA & University of California, San Diego, San Diego, CA, USA
Joy O. Kim
Adobe Research, San Francisco, CA, USA
Hijung Valentina Shin
Adobe Research, Cambridge, MA, USA
Joel Brandt
Adobe Research, Santa Monica, CA, USA
Mira Dontcheva
Adobe Research, Seattle, WA, USA
DOI

10.1145/3313831.3376437

Paper URL

https://doi.org/10.1145/3313831.3376437

Conference: CHI 2020

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2020.acm.org/)

Session: Photo & video manipulation

Paper session: 312 NI'IHAU
5 presentations
2020-04-28 23:00:00 – 2020-04-29 00:15:00