VIVID: Human-AI Collaborative Authoring of Vicarious Dialogues from Lecture Videos

Abstract

Lengthy monologue-style online lectures cause learners to lose engagement easily. Designing lectures in a “vicarious dialogue” format can foster learners’ cognitive activities more than a monologue format. However, designing online lectures in a dialogue style catered to the diverse needs of learners is laborious for instructors. We conducted a design workshop with eight educational experts and seven instructors to derive key guidelines and explore the potential of large language models (LLMs) to transform a monologue lecture script into a pedagogically meaningful dialogue. Applying these design guidelines, we created VIVID, which allows instructors to collaborate with LLMs to design, evaluate, and modify pedagogical dialogues. In a within-subjects study with instructors (N=12), we show that VIVID helped instructors select and revise dialogues efficiently, thereby supporting the authoring of quality dialogues. Our findings demonstrate the potential of LLMs to assist instructors in creating high-quality educational dialogues across various learning stages.

Authors
Seulgi Choi
KAIST, Daejeon, Korea, Republic of
Hyewon Lee
KAIST, Daejeon, Korea, Republic of
Yoonjoo Lee
KAIST, Daejeon, Korea, Republic of
Juho Kim
KAIST, Daejeon, Korea, Republic of
Paper URL

doi.org/10.1145/3613904.3642867

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Education and AI B

321
5 presentations
2024-05-15 23:00:00 – 2024-05-16 00:20:00