Learning, Creating, and Understanding Art

Conference Name
CHI 2025
Reviving Mural Art through Generative AI: A Comparative Study of AI-Generated and Hand-Crafted Recreations
Abstract

Virtual reality (VR) provides an immersive and interactive platform for presenting ancient murals, enhancing users' understanding and appreciation of these invaluable cultural treasures. However, traditional hand-crafted methods for recreating murals in VR are labor-intensive, time-consuming, and require significant expertise, limiting their scalability for large-scale mural scenes. To address these challenges, we propose a comprehensive pipeline that leverages generative AI to automate the mural recreation process. The pipeline is validated through the reconstruction of the Foguang Temple scene from the Dunhuang Murals. A user study comparing the AI-generated scene with a hand-crafted one reveals no significant differences in presence, authenticity, engagement and enjoyment, and emotion. Additionally, our findings identify areas for improvement in AI-generated recreations, such as enhancing historical fidelity and offering customization. This work paves the way for more scalable, efficient, and accessible methods of revitalizing cultural heritage in VR, opening new opportunities for mural preservation, demonstration, and dissemination.

Authors
Shuo Zhao
Duke Kunshan University, Kunshan, Soochow, China
Yifei Huang
Duke Kunshan University, Kunshan, China
Xiaoyang He
School of Information Management, Wuhan, Hubei, China
Xin Tong
Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Xin Li
Duke Kunshan University, Kunshan, China
Dan Wu
Wuhan University, Wuhan, China
DOI

10.1145/3706598.3714157

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714157

Video
HarmonyCut: Supporting Creative Chinese Paper-cutting Design with Form and Connotation Harmony
Abstract

Chinese paper-cutting, an Intangible Cultural Heritage (ICH), faces challenges from the erosion of traditional culture, driven by the prevalence of realism and limited public access to cultural elements. While generative AI can enhance paper-cutting design with its extensive knowledge base and efficient production capabilities, it often struggles to align content with cultural meaning because both users and models lack comprehensive paper-cutting knowledge. To address these issues, we conducted a formative study (N=7) to identify the workflow and design space, including four core factors (Function, Subject Matter, Style, and Method of Expression) and a key element (Pattern). We then developed HarmonyCut, a generative AI-based tool that translates abstract intentions into creative and structured ideas. The tool facilitates exploration of suggested related content (knowledge, works, and patterns), enabling users to select, combine, and adjust elements for creative paper-cutting design. A user study (N=16) and an expert evaluation (N=3) demonstrated that HarmonyCut effectively provided relevant knowledge, aided the ideation of diverse paper-cutting designs, and maintained design quality within the design space, ensuring alignment between form and cultural connotation.

Authors
Huanchen Wang
Southern University of Science and Technology, Shenzhen, Guangdong, China
Tianrun Qiu
Southern University of Science and Technology, Shenzhen, Guangdong, China
Jiaping Li
Southern University of Science and Technology, Shenzhen, Guangdong, China
Zhicong Lu
George Mason University, Fairfax, Virginia, United States
Yuxin Ma
Southern University of Science and Technology, Shenzhen, Guangdong, China
DOI

10.1145/3706598.3714159

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714159

Video
EyeSee: Enhancing Art Appreciation through Anthropomorphic Interpretations from Multiple Perspectives
Abstract

Art appreciation serves as a crucial medium for emotional communication and sociocultural dialogue. In the digital era, fostering deep user engagement on online art appreciation platforms remains a challenge. Leveraging large language models (LLMs), we present EyeSee, a system designed to engage users through anthropomorphic characters. We implemented and evaluated three modes (Narrator, Artist, and In-Situ), acting as a third-person narrator, a first-person creator, and first-person created objects, respectively, across two sessions: Narrative and Recommendation. We conducted a within-subject study with 24 participants. In the Narrative session, we found that the In-Situ and Artist modes had higher aesthetic appeal than the Narrator mode, although the Artist mode showed lower perceived usability. Additionally, from the Narrative to the Recommendation session, we found that user-perceived relatability and believability were sustained, but user-perceived consistency and stereotypicality changed. Our findings suggest novel implications for anthropomorphic character design in enhancing user engagement.

Authors
Yongming Li
Xi'an Jiaotong University, Xi'an, China
Hangyue Zhang
University of Illinois Urbana-Champaign, Champaign, Illinois, United States
Andrea Yaoyun Cui
University of Illinois Urbana-Champaign, Champaign, Illinois, United States
Zisong Ma
University of Illinois Urbana-Champaign, Champaign, Illinois, United States
Yunpeng Song
Xi'an Jiaotong University, Xi'an, China
Zhongmin Cai
Xi'an Jiaotong University, Xi'an, China
Yun Huang
University of Illinois at Urbana-Champaign, Champaign, Illinois, United States
DOI

10.1145/3706598.3714042

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714042

Video
“It’s Like Being On Stage”: Conveying Dancers’ Expressiveness Through A Haptic-Installed Contemporary Dance Performance
Abstract

In dance performances, choreography, music and lighting are combined to convey meaning to the audience. However, this communication typically relies on visual and auditory stimuli alone. While haptic technologies have been leveraged to enhance the perception of dancers’ movements, less focus has been placed on exploring their potential in enhancing dancers’ somatic expressiveness. Through co-design activities with 5 professional contemporary dancers, we crafted an interdisciplinary combination of choreography and haptics. In total, 128 audience members watched one of three live performances while wearing custom-made haptic wristbands. From an open-ended questionnaire and interviews with audience members, we explore how the introduction of haptics deepens their embodied sensations and helps to create a sense of resonance with the dancers. Based on our findings, we discuss implications for future directions in how haptic technologies could drive innovation in dance performances from the point of view of both dancers’ creativity and audience experiences.

Award
Best Paper
Authors
Ximing Shen
Keio University Graduate School of Media Design, Yokohama, Japan
Xuan Li
Graduate School of Media Design, Yokohama, Japan
Youichi Kamiyama
Keio University Graduate School of Media Design, Yokohama, Japan
Danny Hynds
Keio University Graduate School of Media Design, Yokohama, Kanagawa, Japan
Giulia Barbareschi
Keio University, Yokohama, Japan
RAY LC
City University of Hong Kong, Hong Kong, Hong Kong
Sohei Wakisaka
Keio University Graduate School of Media Design, Tokyo, Japan
Arata Horie
Keio University Graduate School of Media Design, Yokohama, Japan
Kouta Minamizawa
Keio University Graduate School of Media Design, Yokohama, Japan
DOI

10.1145/3706598.3713321

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713321

Video
AIFiligree: A Generative AI Framework for Designing Exquisite Filigree Artworks
Abstract

Filigree art, a form of intricate metalwork, has captivated audiences worldwide with its delicate lace-like patterns and the refined aesthetics of its interwoven metal wires. In particular, Chinese Intangible Cultural Heritage filigree craftsmanship holds unique aesthetic value in its fine patterns and complex three-dimensional shapes. However, designing and creating filigree artworks is labor-intensive and technically complex, often requiring extensive training and a deep understanding of the craft, which constrains its design aesthetics and cultural continuity. To overcome these challenges, this study proposes an artificial intelligence (AI)-aided method that uses AI-generated content (AIGC) technology to accelerate the visualization process of this time-consuming and intricate craft, investigating the role of AI in craft design. First, a comprehensive study of filigree art culture identifies more than ten historic filigree techniques and the opportunities they present for AI. Then, an AI-powered framework called AIFiligree is developed by optimizing culture-based labels and training parameters, enabling the generation of highly authentic fine filigree structures. Further, user workflows are introduced to support diverse design scenarios. Through user studies involving 22 filigree experts and 16 designers, we gained insights into AI's opportunities and challenges in cultural learning, expression, and design.

Authors
Ye Tao
Hangzhou City University, Hangzhou, China
Xiaohui Fu
Hangzhou City University, Hangzhou, China
Jiaying Wu
Hangzhou City University, Hangzhou, China
Ze Bian
Hangzhou City University, Hangzhou, China
Aiyu Zhu
Zhejiang Sci-Tech University, Hangzhou, China
Qi Bao
Hangzhou City University, Hangzhou, China
Weiyue Zheng
Hangzhou City University, Hangzhou, China
Yubo Wang
Hangzhou City University, Hangzhou, China
Bin Zhu
Hangzhou City University, Hangzhou, China
Cheng Yang
Hangzhou City University, Hangzhou, China
Chuyi Zhou
Hangzhou City University, Hangzhou, China
DOI

10.1145/3706598.3713281

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713281

Video
ArtMentor: AI-Assisted Evaluation of Artworks to Explore Multimodal Large Language Models Capabilities
Abstract

Can Multimodal Large Language Models (MLLMs), with capabilities in perception, recognition, understanding, and reasoning, act as independent assistants in art evaluation dialogues? Current MLLM evaluation methods, reliant on subjective human scoring or costly interviews, lack comprehensive scenario coverage. This paper proposes a process-oriented Human-Computer Interaction (HCI) space design for more accurate MLLM assessment and development. This approach aids teachers in efficient art evaluation and records interactions for MLLM capability assessment. We introduce ArtMentor, a comprehensive space integrating a dataset and three systems for optimized MLLM evaluation. It includes 380 sessions from five art teachers across nine critical dimensions. The modular system features entity recognition, review generation, and suggestion generation agents, enabling iterative upgrades. Machine learning and natural language processing ensure reliable evaluations. Results confirm GPT-4o’s effectiveness in assisting teachers in art evaluation dialogues. Our contributions are available at https://artmentor.github.io/.

Authors
Chanjin Zheng
Shanghai Institute of Artificial Intelligence for Education, Shanghai, China
Zengyi Yu
East China Normal University, Shanghai, China
Yilin Jiang
Zhejiang University of Technology, Hangzhou, China
Mingzi Zhang
East China Normal University, Shanghai, China
Xunuo Lu
Zhejiang University of Technology, Hangzhou, China
Jing Jin
Zhejiang Normal University, Jinhua, China
Liteng Gao
University of Shanghai for Science and Technology, Shanghai, China
DOI

10.1145/3706598.3713274

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713274

Video
SandTouch: Empowering Virtual Sand Art in VR with AI Guidance and Emotional Relief
Abstract

Sand painting is a highly aesthetic and valuable form of art, but it is often constrained by the need for specific equipment and the associated learning curve. To address these challenges, we developed SandTouch, a VR sand painting system that offers an immersive and intuitive sand painting experience closely mirroring interaction with physical sand. Leveraging advanced gesture recognition technology, SandTouch allows users to create intricate sand art in a virtual environment, capturing the fine sensations of real sand manipulation along with realistic sound feedback. The integration of an AI agent further enhances the experience by intelligently interpreting users' creative intentions from real-time interactions and offering contextually relevant artistic suggestions. Comprehensive evaluations demonstrated a significant increase in user engagement and immersion. Furthermore, the realistic sound feedback enhances emotional relief and deepens the painting experience.

Authors
Long Liu
East China Normal University, Shanghai, China
Junbin Ren
East China Normal University, Shanghai, China
Zeyuan Fan
Jiangsu University of Science and Technology, Zhenjiang, China
Chenhui Li
East China Normal University, Shanghai, China
Gaoqi He
East China Normal University, Shanghai, China
Changbo Wang
School of Computer Science and Technology, Shanghai, China
Yang Gao
East China Normal University, Shanghai, China
Chen Li
Computer Science and Technology, Shanghai, China
DOI

10.1145/3706598.3714275

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714275

Video