Enhanced Videogame Livestreaming by Reconstructing an Interactive 3D Game View for Spectators
Description

Many videogame players livestream their gameplay so remote spectators can watch for enjoyment, out of fandom, and to learn strategies and techniques. Current approaches capture the player's rendered RGB view of the game, and then encode and stream it as a 2D live video feed. We extend this basic concept by also capturing the depth buffer, camera pose, and projection matrix from the videogame's rendering pipeline and packaging them all within an MPEG-4 media container. Combining these additional data streams with the RGB view, our system builds a real-time, cumulative 3D representation of the live game environment for spectators. This lets each spectator individually control a personal game view in 3D and watch the game from multiple perspectives, enabling a new kind of videogame spectatorship experience.
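
The abstract does not spell out the reconstruction step, but the core idea it describes (combining the RGB view with the depth buffer, camera pose, and projection matrix) can be sketched as a standard depth-buffer unprojection. A minimal Python/NumPy sketch follows; the function name, matrix conventions, and OpenGL-style depth range are assumptions for illustration, not the authors' implementation.

    import numpy as np

    def unproject_depth_to_world(depth, proj, view_inv):
        """Turn a captured depth buffer into world-space points.

        depth    : (H, W) array of normalized device depths in [0, 1]
        proj     : 4x4 projection matrix captured from the renderer
        view_inv : 4x4 inverse view matrix, i.e. the camera pose
        Returns an (H*W, 3) array of world-space positions that a spectator
        client could accumulate into a cumulative 3D scene.
        """
        h, w = depth.shape
        # Pixel centers -> normalized device coordinates (NDC) in [-1, 1].
        xs = (np.arange(w) + 0.5) / w * 2.0 - 1.0
        ys = 1.0 - (np.arange(h) + 0.5) / h * 2.0
        xv, yv = np.meshgrid(xs, ys)
        zv = depth * 2.0 - 1.0  # assumes an OpenGL-style [0, 1] depth buffer

        ndc = np.stack([xv, yv, zv, np.ones_like(zv)], axis=-1).reshape(-1, 4)

        # NDC -> view space (undo the projection), then view -> world (apply pose).
        view_pts = ndc @ np.linalg.inv(proj).T
        view_pts /= view_pts[:, 3:4]          # perspective divide
        world_pts = view_pts @ view_inv.T
        return world_pts[:, :3]

Each decoded video frame would contribute one such point set, colored from the corresponding RGB frame, which is what allows a spectator to orbit a personal camera around the accumulated scene.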

CatchLive: Real-time Summarization of Live Streams with Stream Content and Interaction Data
Description

Live streams usually last several hours, and many viewers join in the middle. These viewers often want to understand what has happened in the stream so far, but catching up on the earlier parts is challenging: it is difficult to know which parts of the long, unedited stream are important while also keeping up with the ongoing broadcast. We present CatchLive, a system that provides a real-time summary of ongoing live streams by utilizing both the stream content and user interaction data. CatchLive provides viewers with an overview of the stream along with summaries of highlight moments at multiple levels of detail in a readable format. Results from deploying the system on three streams with 67 viewers show that CatchLive helps viewers grasp the overview of the stream, identify important moments, and stay engaged. Our findings provide insights into designing summarizations of live streams that reflect their characteristics.
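
The abstract does not detail how interaction data feeds the summary, so the sketch below shows one plausible signal: flagging highlight candidates where the chat-message rate spikes above a rolling baseline. The window sizes, threshold factor, and function name are assumptions for illustration rather than CatchLive's actual pipeline.

    def detect_highlights(chat_timestamps, window_s=30, baseline_s=300, factor=2.0):
        """Flag moments where chat activity spikes above its recent baseline.

        chat_timestamps : sorted message times in seconds from stream start
        Returns window start times considered highlight candidates.
        """
        if not chat_timestamps:
            return []
        highlights = []
        t, end = chat_timestamps[0], chat_timestamps[-1]
        while t <= end:
            # Message counts in the short window vs. the preceding baseline window.
            short = sum(t <= m < t + window_s for m in chat_timestamps)
            base = sum(t - baseline_s <= m < t for m in chat_timestamps)
            short_rate = short / window_s
            base_rate = base / baseline_s
            if base_rate > 0 and short_rate > factor * base_rate:
                highlights.append(t)
            t += window_s
        return highlights

In a system like CatchLive, such candidates would then be paired with stream content (e.g., transcript segments) to produce the multi-level summaries described above.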

FitVid: Responsive and Flexible Video Content Adaptation
Description

Mobile video-based learning attracts many learners with its mobility and ease of access. However, most lectures are designed for desktops. Our formative study reveals mobile learners' two major needs: more readable content and customizable video design. To support mobile-optimized learning, we present FitVid, a system that provides responsive and customizable video content. Our system consists of (1) an adaptation pipeline that reverse-engineers pixels to retrieve design elements (e.g., text, images) from videos, leveraging deep learning with a custom dataset, which powers (2) a UI that enables resizing, repositioning, and toggling in-video elements. The content adaptation improves the guideline compliance rate by 24% for word count and 8% for font size. The content evaluation study (n=198) shows that the adaptation significantly increases readability and user satisfaction. The user study (n=31) indicates that FitVid significantly improves learning experience, interactivity, and concentration. We discuss design implications for responsive and customizable video adaptation.
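
As a hedged illustration of the adaptation step the abstract describes (detected design elements that can be resized to meet mobile readability guidelines), the sketch below enlarges undersized text boxes up to a scale limit. The element representation, minimum size, and scale limit are assumptions, not FitVid's actual pipeline or thresholds.

    from dataclasses import dataclass

    @dataclass
    class TextElement:
        x: float        # position on the frame, in pixels
        y: float
        width: float
        height: float   # box height, used here as a proxy for font size
        text: str

    MIN_TEXT_PX = 28    # assumed minimum readable text height on mobile

    def adapt_for_mobile(elements, scale_limit=2.0):
        """Enlarge undersized text elements, capped at scale_limit."""
        adapted = []
        for el in elements:
            if el.height < MIN_TEXT_PX:
                s = min(MIN_TEXT_PX / el.height, scale_limit)
                el = TextElement(el.x, el.y, el.width * s, el.height * s, el.text)
            adapted.append(el)
        return adapted

A per-element representation like this is also what makes the UI-side operations (resizing, repositioning, toggling) possible once elements have been recovered from pixels.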

Katika: An End-to-End System for Authoring Amateur Explainer Motion Graphics Videos
Description

Explainer motion graphics videos, which use a combination of graphical elements and movement to convey a visual message, are becoming increasingly popular among amateur creators in different domains. However, to author motion graphics videos, amateurs either have to face a steep learning curve with professional design tools or struggle to repurpose slide-sharing tools that are easier to access but have limited animation capabilities. To simplify the process of motion graphics authoring, we present the design and implementation of Katika, an end-to-end system for creating shots based on a script, adding artworks and animation from a crowdsourced library, and editing the video using semi-automated transitions. Our observational study illustrates that participants (N=11) enjoyed using Katika and, within a one-hour session, managed to create an explainer motion graphics video. We identify opportunities for future HCI research to lower the barriers to entry and democratize the authoring of motion graphics videos.
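
Katika's internal representation is not given in the abstract; the sketch below shows one plausible way to model its script-to-shot structure (shots holding library artworks with animations, plus a transition to the next shot) as plain data objects. All class and field names are illustrative assumptions.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Animation:
        artwork_id: str       # asset pulled from the crowdsourced library
        effect: str           # e.g. "fade_in", "slide_left"
        start_s: float
        duration_s: float

    @dataclass
    class Shot:
        script_line: str                  # the script sentence this shot illustrates
        animations: List[Animation] = field(default_factory=list)
        transition_out: str = "cut"       # semi-automated transition to the next shot

    def total_duration(shots, default_shot_s=4.0):
        """Estimate video length from per-shot animation timings."""
        total = 0.0
        for shot in shots:
            if shot.animations:
                total += max(a.start_s + a.duration_s for a in shot.animations)
            else:
                total += default_shot_s
        return total

A timeline of such Shot objects is the kind of structure a semi-automated editor could traverse to insert transitions and render the final video.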
