AudioXtend: Assisted Reality Visual Accompaniments for Audiobook Storytelling During Everyday Routine Tasks

Abstract

The rise of multitasking in contemporary lifestyles has positioned audio-first content as an essential medium for information consumption. We present AudioXtend, an approach to augment audiobook experiences during daily tasks by integrating glanceable, AI-generated visuals through optical see-through head-mounted displays (OHMDs). Our initial study showed that these visual augmentations not only preserved users' primary task efficiency but also dramatically enhanced immediate auditory content recall by 33.3% and 7-day recall by 32.7%, alongside a marked improvement in narrative engagement. Through participatory design workshops involving digital arts designers, we crafted a set of design principles for visual augmentations that are attuned to the requirements of multitaskers. Finally, a 3-day take-home field study further revealed new insights for everyday use, underscoring the potential of assisted reality (aR) to enhance heads-up listening and incidental learning experiences.

Authors
Felicia Fang-Yi Tan
National University of Singapore, Singapore, Singapore
Peisen Xu
National University of Singapore, Singapore, Singapore
Ashwin Ram
National University of Singapore, Singapore, Singapore
Wei Zhen Suen
National University of Singapore, Singapore, Singapore
Shengdong Zhao
National University of Singapore, Singapore, Singapore
Yun Huang
University of Illinois at Urbana-Champaign, Champaign, Illinois, United States
Christophe Hurter
Université de Toulouse, Toulouse, France
Paper URL

doi.org/10.1145/3613904.3642514

Video

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Remote Presentations: Highlight on Immersive Interactions

Remote Sessions
8 presentations
2024-05-15 18:00:00 – 2024-05-16 02:20:00