Pre-visit information can enrich museum experiences, yet it creates a dilemma: text-only descriptions can overwhelm visitors without visual anchors, while viewing artworks in advance can spoil the surprise. To address this tension, we introduce TwistLens, a docent-informed, AI-supported image transformation system that generates twisted previews, transformed images that convey interpretive cues while concealing the original visuals. TwistLens extracts key cues from docent text using a structured taxonomy, then applies two strategies: EchoLens, which preserves the intended description while altering its representation, and DecoyLens, which distorts the described information while maintaining representational coherence. A co-design study identified strategy preferences by information type, informing category-specific refinements. A controlled evaluation further showed that TwistLens preserves anticipation, triggers curiosity, and supports active learning without visual spoilers. These findings demonstrate how semantically aware image transformation can balance knowledge delivery and anticipation in museum contexts.
ACM CHI Conference on Human Factors in Computing Systems