While diffusion-based text-to-image (T2I) models provide a simple and powerful way to generate images, guiding this generation remains a challenge. For concepts that are difficult to describe through language, users may struggle to create prompts. Moreover, many of these models are built as end-to-end systems, lacking support for iterative shaping of the image. In response, we introduce PromptPaint, which combines T2I generation with interactions that model how we use colored paints. PromptPaint allows users to go beyond language and mix prompts to express challenging concepts. Just as we iteratively tune colors through the layered placement of paint on a physical canvas, PromptPaint allows users to apply different prompts to different stages of the generative process and different canvas areas. Through a set of studies, we characterize approaches for mixing prompts, design trade-offs, and technical and socio-technical challenges in using generative models. With PromptPaint, we provide insight into future steerable generative tools.
https://doi.org/10.1145/3586183.3606777
ACM Symposium on User Interface Software and Technology