CoKnowledge: Supporting Assimilation of Time-synced Collective Knowledge in Online Science Videos

Abstract

Danmaku, a system of scene-aligned, time-synced, floating comments, can augment video content to create "collective knowledge". However, its chaotic nature often hinders viewers from effectively assimilating this collective knowledge, especially in knowledge-intensive science videos. Through a formative study, we examined viewers' practices for processing collective knowledge and the specific barriers they encountered. Building on these insights, we designed a processing pipeline to filter, classify, and cluster danmaku, leading to the development of CoKnowledge -- a tool incorporating a video abstract, knowledge graphs, and supplementary danmaku features to support viewers' assimilation of collective knowledge in science videos. A within-subject study (N=24) showed that CoKnowledge significantly enhanced participants' comprehension and recall of collective knowledge compared to a baseline with unprocessed live comments. Based on our analysis of user interaction patterns and feedback on design features, we present design considerations for developing similar support tools.

Authors
Yuanhao Zhang
Hong Kong University of Science and Technology, Hong Kong, China
Yumeng Wang
Hong Kong University of Science and Technology, Hong Kong, China
Xiyuan Wang
ShanghaiTech University, Shanghai, China
Changyang He
Max Planck Institute for Security and Privacy, Bochum, Germany
Chenliang Huang
New York University, Brooklyn, New York, United States
Xiaojuan Ma
Hong Kong University of Science and Technology, Hong Kong, China
DOI

10.1145/3706598.3713682

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713682

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Knowledge Work

Room: G416+G417
6 presentations
2025-04-30 23:10:00 – 2025-05-01 00:40:00