Creativity: Visualizations and AI

Conference Name
CHI 2024
IntentTuner: An Interactive Framework for Integrating Human Intentions in Fine-tuning Text-to-Image Generative Models
Abstract

Fine-tuning facilitates the adaptation of text-to-image generative models to novel concepts (e.g., styles and portraits), empowering users to forge creatively customized content. Recent efforts on fine-tuning focus on reducing training data and lightening computation overload but neglect alignment with user intentions, particularly in manual curation of multi-modal training data and intent-oriented evaluation. Informed by a formative study with fine-tuning practitioners for comprehending user intentions, we propose IntentTuner, an interactive framework that intelligently incorporates human intentions throughout each phase of the fine-tuning workflow. IntentTuner enables users to articulate training intentions with imagery exemplars and textual descriptions, automatically converting them into effective data augmentation strategies. Furthermore, IntentTuner introduces novel metrics to measure user intent alignment, allowing intent-aware monitoring and evaluation of model training. Application exemplars and user studies demonstrate that IntentTuner streamlines fine-tuning, reducing cognitive effort and yielding superior models compared to the common baseline tool.

Authors
Xingchen Zeng
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, Guangdong, China
Ziyao Gao
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, Guangdong, China
Yilin Ye
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, Guangdong, China
Wei Zeng
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, Guangdong, China
Paper URL

doi.org/10.1145/3613904.3642165

Video
Table Illustrator: Puzzle-based interactive authoring of plain tables
Abstract

Plain tables excel at displaying data details and are widely used in data presentation, often polished to an elaborate appearance for readability in many scenarios. However, existing authoring tools fail to provide both flexible and efficient support for altering the table layout and styles, motivating us to develop an intuitive and swift tool for table prototyping. To this end, we contribute Table Illustrator, a table authoring system taking a novel visual metaphor, puzzle, as the primary interaction unit. Through combinations and configurations on puzzles, the system enables rapid table construction and supports a diverse range of table layouts and styles. The tool design is informed by practical challenges and requirements from interviews with 10 table practitioners and a structured design space based on an analysis of over 2,500 real-world tables. User studies showed that Table Illustrator achieved comparable performance to Microsoft Excel while reducing users' completion time and perceived workload.

Authors
Yanwei Huang
Zhejiang University, Hangzhou, Zhejiang, China
Yurun Yang
Zhejiang University, Hangzhou, Zhejiang, China
Xinhuan Shu
Newcastle University, Newcastle Upon Tyne, United Kingdom
Ran Chen
Zhejiang University, Hangzhou, Zhejiang, China
Di Weng
Zhejiang University, Hangzhou, Zhejiang, China
Yingcai Wu
Zhejiang University, Hangzhou, Zhejiang, China
Paper URL

doi.org/10.1145/3613904.3642415

Video
Is It AI or Is It Me? Understanding Users’ Prompt Journey with Text-to-Image Generative AI Tools
Abstract

Generative Artificial Intelligence (AI) has witnessed unprecedented growth in text-to-image AI tools. Yet, much remains unknown about users' prompt journey with such tools in the wild. In this paper, we posit that designing human-centered text-to-image AI tools requires a clear understanding of how individuals intuitively approach crafting prompts, and what challenges they may encounter. To address this, we conducted semi-structured interviews with 19 existing users of a text-to-image AI tool. Our findings (1) offer insights into users’ prompt journey including structures and processes for writing, evaluating, and refining prompts in text-to-image AI tools and (2) indicate that users must overcome barriers to aligning AI to their intents, and mastering prompt crafting knowledge. From the findings, we discuss the prompt journey as an individual yet a social experience and highlight opportunities for aligning text-to-image AI tools and users’ intents.

Authors
Atefeh Mahdavi Goloujeh
Georgia Institute of Technology, Atlanta, Georgia, United States
Anne Sullivan
Georgia Institute of Technology, Atlanta, Georgia, United States
Brian Magerko
Georgia Institute of Technology, Atlanta, Georgia, United States
Paper URL

doi.org/10.1145/3613904.3642861

Video
PromptCharm: Text-to-Image Generation through Multi-modal Prompting and Refinement
Abstract

The recent advancements in Generative AI have significantly advanced the field of text-to-image generation. The state-of-the-art text-to-image model, Stable Diffusion, is now capable of synthesizing high-quality images with a strong sense of aesthetics. Crafting text prompts that align with the model's interpretation and the user's intent thus becomes crucial. However, prompting remains challenging for novice users due to the complexity of the Stable Diffusion model and the non-trivial effort required for iteratively editing and refining text prompts. To address these challenges, we propose PromptCharm, a mixed-initiative system that facilitates text-to-image creation through multi-modal prompt engineering and refinement. To assist novice users in prompting, PromptCharm first automatically refines and optimizes the user's initial prompt. Furthermore, PromptCharm supports the user in exploring and selecting different image styles within a large database. To assist users in effectively refining their prompts and images, PromptCharm renders model explanations by visualizing the model's attention values. If the user notices any unsatisfactory areas in the generated images, they can further refine the images through model attention adjustment or image inpainting within the rich feedback loop of PromptCharm. To evaluate the effectiveness and usability of PromptCharm, we conducted a controlled user study with 12 participants and an exploratory user study with another 12 participants. These two studies show that participants using PromptCharm were able to create images with higher quality and better aligned with the user's expectations compared with using two variants of PromptCharm that lacked interaction or visualization support.

Authors
Zhijie Wang
University of Alberta, Edmonton, Alberta, Canada
Yuheng Huang
University of Alberta, Edmonton, Alberta, Canada
Da Song
University of Alberta, Edmonton, Alberta, Canada
Lei Ma
The University of Tokyo, Tokyo, Japan
Tianyi Zhang
Purdue University, West Lafayette, Indiana, United States
Paper URL

doi.org/10.1145/3613904.3642803

Video
An Accessible, Three-Axis Plotter for Enhancing Calligraphy Learning through Generated Motion
Abstract


Award
Honorable Mention
Authors
Cathy Mengying Fang
MIT Media Lab, Cambridge, Massachusetts, United States
Lingdong Huang
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Quincy Kuang
Harvard, Cambridge, Massachusetts, United States
Zach Lieberman
MIT, Cambridge, Massachusetts, United States
Pattie Maes
MIT Media Lab, Cambridge, Massachusetts, United States
Hiroshi Ishii
MIT, Cambridge, Massachusetts, United States
Paper URL

doi.org/10.1145/3613904.3642792

Video