Video Authoring

Conference
CHI 2022
Enhanced Videogame Livestreaming by Reconstructing an Interactive 3D Game View for Spectators
Abstract

Many videogame players livestream their gameplay so remote spectators can watch for enjoyment, out of fandom, or to learn strategies and techniques. Current approaches capture the player's rendered RGB view of the game, then encode and stream it as a 2D live video feed. We extend this basic concept by also capturing the depth buffer, camera pose, and projection matrix from the videogame's rendering pipeline and packaging them all within an MPEG-4 media container. Combining these additional data streams with the RGB view, our system builds a real-time, cumulative 3D representation of the live game environment for spectators. This enables each spectator to individually control a personal game view in 3D and watch the game from multiple perspectives, enabling a new kind of videogame spectatorship experience.
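
The core reconstruction step described above can be illustrated with a short sketch: unprojecting each depth-buffer sample back into world space using the captured projection matrix and camera pose. This is a minimal illustration under assumed OpenGL-style conventions, not the paper's actual implementation; all function and variable names are our own.

```python
import numpy as np

def unproject_depth(depth, proj, view):
    """Back-project a per-frame depth buffer into world-space 3D points.

    depth: (H, W) array of depth-buffer values in [0, 1]
    proj:  4x4 projection matrix captured from the game's renderer
    view:  4x4 view matrix derived from the captured camera pose
    Assumes OpenGL-style clip-space conventions.
    """
    h, w = depth.shape
    # Pixel centers -> normalized device coordinates in [-1, 1]
    xs = (np.arange(w) + 0.5) / w * 2.0 - 1.0
    ys = 1.0 - (np.arange(h) + 0.5) / h * 2.0
    xv, yv = np.meshgrid(xs, ys)
    zv = depth * 2.0 - 1.0  # depth in [0, 1] -> NDC z in [-1, 1]

    ndc = np.stack([xv, yv, zv, np.ones_like(zv)], axis=-1).reshape(-1, 4)
    # Invert the rendering transform: NDC -> world space
    inv = np.linalg.inv(proj @ view)
    world = ndc @ inv.T
    world = world[:, :3] / world[:, 3:4]  # perspective divide
    return world.reshape(h, w, 3)
```

Each unprojected frame, colored by the corresponding RGB view, could then be accumulated into a persistent point cloud that a spectator's personal camera orbits freely.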

Authors
Jeremy Hartmann
University of Waterloo, Waterloo, Ontario, Canada
Daniel Vogel
University of Waterloo, Waterloo, Ontario, Canada
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517521

Video
CatchLive: Real-time Summarization of Live Streams with Stream Content and Interaction Data
Abstract

Live streams usually last several hours, and many viewers join in the middle. These viewers often want to understand what has happened in the stream so far, but catching up on the earlier parts is challenging: it is difficult to know which parts of the long, unedited stream are important while also keeping up with the ongoing stream. We present CatchLive, a system that provides a real-time summary of an ongoing live stream by utilizing both the stream content and user interaction data. CatchLive gives viewers an overview of the stream along with summaries of highlight moments at multiple levels of detail in a readable format. Results from deployments of three streams with 67 viewers show that CatchLive helps viewers grasp an overview of the stream, identify important moments, and stay engaged. Our findings provide insights into designing summarizations of live streams that reflect their characteristics.
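
To make the interaction-data side of such a pipeline concrete, the sketch below flags highlight moments where chat activity spikes above a rolling baseline, a common heuristic for live-stream highlight detection. The window size, spike factor, and function names are illustrative assumptions; CatchLive's actual method also draws on stream content.

```python
from collections import deque

def detect_highlights(chat_timestamps, window=30.0, factor=2.5, history=20):
    """Flag window start times whose chat-message count spikes above
    the rolling average of recent windows.

    chat_timestamps: sorted message times in seconds from stream start
    window:  bucket size in seconds
    factor:  how far above the rolling mean counts as a spike
    history: number of past windows kept in the rolling baseline
    """
    if not chat_timestamps:
        return []
    # Bucket messages into fixed-size time windows
    counts, t, i = [], 0.0, 0
    end = chat_timestamps[-1]
    while t <= end:
        n = 0
        while i < len(chat_timestamps) and chat_timestamps[i] < t + window:
            n += 1
            i += 1
        counts.append((t, n))
        t += window

    # Compare each window against the rolling mean of recent windows
    highlights, recent = [], deque(maxlen=history)
    for start, n in counts:
        baseline = sum(recent) / len(recent) if recent else 0.0
        if baseline > 0 and n > factor * baseline:
            highlights.append(start)
        recent.append(n)
    return highlights
```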

Authors
Saelyne Yang
School of Computing, KAIST, Daejeon, Korea, Republic of
Jisu Yim
KAIST, Daejeon, Korea, Republic of
Juho Kim
KAIST, Daejeon, Korea, Republic of
Hijung Valentina Shin
Adobe Research, Cambridge, Massachusetts, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517461

Video
FitVid: Responsive and Flexible Video Content Adaptation
Abstract

Mobile video-based learning attracts many learners with its mobility and ease of access. However, most lecture videos are designed for desktop viewing. Our formative study reveals two major needs of mobile learners: more readable content and customizable video design. To support mobile-optimized learning, we present FitVid, a system that provides responsive and customizable video content. Our system consists of (1) an adaptation pipeline that reverse-engineers pixels to retrieve design elements (e.g., text, images) from videos, leveraging deep learning with a custom dataset, which powers (2) a UI that enables resizing, repositioning, and toggling of in-video elements. The content adaptation improves the guideline compliance rate by 24% for word count and 8% for font size. A content evaluation study (n=198) shows that the adaptation significantly increases readability and user satisfaction, and a user study (n=31) indicates that FitVid significantly improves learning experience, interactivity, and concentration. We discuss design implications for responsive and customizable video adaptation.
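
As a rough illustration of the adaptation stage, the sketch below rescales design elements recovered from a lecture frame to a narrower mobile viewport and enlarges any text that falls below a readability guideline. The `Element` structure and the 16 px minimum are assumptions for illustration and do not come from the paper; the upstream deep-learning detector is omitted.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Element:
    kind: str                                # "text" or "image"
    box: Tuple[float, float, float, float]   # (x, y, w, h) in pixels
    font_px: Optional[float] = None          # estimated font size for text

MIN_FONT_PX = 16.0  # assumed mobile readability guideline

def adapt_for_mobile(elements: List[Element], frame_w: float,
                     target_w: float) -> List[Element]:
    """Rescale detected in-video elements to a narrower mobile viewport,
    enlarging any text whose scaled font size falls below the guideline."""
    scale = target_w / frame_w
    adapted = []
    for el in elements:
        x, y, w, h = (v * scale for v in el.box)
        font = el.font_px * scale if el.font_px is not None else None
        if el.kind == "text" and font is not None and font < MIN_FONT_PX:
            # Grow the text box proportionally to restore readability
            grow = MIN_FONT_PX / font
            w, h, font = w * grow, h * grow, MIN_FONT_PX
        adapted.append(Element(el.kind, (x, y, w, h), font))
    return adapted
```

Resizing, repositioning, and toggling in the UI would then amount to editing these element records before re-rendering the frame.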

Authors
Jeongyeon Kim
KAIST, Daejeon, Korea, Republic of
Yubin Choi
KAIST, Daejeon, Korea, Republic of
Minsuk Kahng
Oregon State University, Corvallis, Oregon, United States
Juho Kim
KAIST, Daejeon, Korea, Republic of
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3501948

Video
Katika: An End-to-End System for Authoring Amateur Explainer Motion Graphics Videos
Abstract

Explainer motion graphics videos, which use a combination of graphical elements and movement to convey a visual message, are becoming increasingly popular among amateur creators in different domains. However, to author motion graphics videos, amateurs must either face a steep learning curve with professional design tools or struggle to repurpose slide-sharing tools that are easier to access but have limited animation capabilities. To simplify motion graphics authoring, we present the design and implementation of Katika, an end-to-end system for creating shots based on a script, adding artworks and animation from a crowdsourced library, and editing the video using semi-automated transitions. Our observational study shows that participants (N=11) enjoyed using Katika and, within a one-hour session, managed to create an explainer motion graphics video. We identify opportunities for future HCI research to lower the barriers to entry and democratize the authoring of motion graphics videos.
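
A minimal, assumed data model can make the shot-plus-transition structure concrete: each shot illustrates a script line with placed artworks, and a semi-automated transition interpolates artwork placement between adjacent shots. None of these names come from the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Artwork:
    asset_id: str        # e.g., an item from a crowdsourced library
    x: float
    y: float
    opacity: float = 1.0

@dataclass
class Shot:
    script_line: str     # the narration this shot illustrates
    duration: float      # seconds
    artworks: List[Artwork] = field(default_factory=list)

def crossfade(a: Artwork, b: Artwork, t: float) -> Artwork:
    """Linearly interpolate placement and opacity between two artworks,
    a simple stand-in for one frame of a semi-automated transition."""
    lerp = lambda p, q: p + (q - p) * t
    return Artwork(b.asset_id, lerp(a.x, b.x), lerp(a.y, b.y),
                   lerp(a.opacity, b.opacity))
```

Sampling `crossfade` at increasing `t` between the last artwork of one shot and the first of the next yields the in-between frames of a transition.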

Authors
Amir Jahanlou
Simon Fraser University, Surrey, British Columbia, Canada
Parmit K. Chilana
Simon Fraser University, Burnaby, British Columbia, Canada
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517741

Video