TwistLens: A Docent-Informed Image Transformation to Create Previews That Prompt Anticipation and Interpretive Experiences Before Museum Visits

Abstract

Pre-visit information can enrich museum experiences, yet it creates a dilemma: text-only descriptions can overwhelm visitors without visual anchors, while viewing artworks in advance can spoil the surprise. To address this tension, we introduce TwistLens, a docent-informed, AI-supported image transformation system that generates twisted previews: transformed images that convey interpretive cues while concealing the original visuals. TwistLens extracts key cues from docent text using a structured taxonomy, then applies two strategies: EchoLens, which preserves the intended description while altering its representation, and DecoyLens, which distorts the described information while maintaining representational coherence. A co-design study identified strategy preferences by information type, informing category-specific refinements. A controlled evaluation further showed that TwistLens preserves anticipation, triggers curiosity, and supports active learning without visual spoilers. These findings demonstrate how semantically aware image transformation can balance knowledge delivery and anticipation in museum contexts.

Award
Honorable Mention
Authors
Thao Phuong Vu
Yonsei University, Seoul, Republic of Korea
Bokyung Lee
Yonsei University, Seoul, Republic of Korea

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Heritage, Memory, & Speculative Narratives

P1 - Room 133
7 presentations
2026-04-16, 20:15–21:45