Creative Visions: Creativity Support Tools

Conference name
UIST 2023
Beyond the Artifact: Power as a Lens for Creativity Support Tools
Abstract

Researchers who build creativity support tools (CSTs) define abstractions and software representations that align with user needs to give users the power to accomplish tasks. However, these specifications also structure and limit how users can and should think, act, and express themselves. Thus, tool designers unavoidably exert power over their users by enacting a "normative ground" through their tools. Drawing on interviews with 11 creative practitioners, tool designers, and CST researchers, we offer a definition of empowerment in the context of creative practice, build a preliminary theory of how power relationships manifest in CSTs, and explain why researchers have had trouble addressing these concepts in the past. We re-examine CST literature through a lens of power and argue that mitigating power imbalances at the level of technical design requires enabling both vertical movement along levels of abstraction and horizontal movement between tools through interoperable representations. A lens of power is one possible orientation that lets us recognize the methodological shifts required to move towards building "artistic support tools."

Authors
Jingyi Li
Stanford University, Stanford, California, United States
Eric Rawn
University of California, Berkeley, Berkeley, California, United States
Jacob Ritchie
Stanford University, Stanford, California, United States
Jasper Tran O'Leary
University of Washington, Seattle, Washington, United States
Sean Follmer
Stanford University, Stanford, California, United States
Paper URL

https://doi.org/10.1145/3586183.3606831

Video
XCreation: A Graph-Based Crossmodal Generative Creativity Support Tool
Abstract

Creativity Support Tools (CSTs) aid in the efficient and effective composition of creative content, such as picture books. However, many existing CSTs support only mono-modal creation, even though prior research has become theoretically and technically mature enough to support multi-modal creation. To overcome this limitation, we introduce XCreation, a novel CST that leverages generative AI to support cross-modal storybook creation. Nevertheless, directly deploying AI models in CSTs can still be problematic, as they are mostly black-box architectures that are not comprehensible to human users. Therefore, we integrate an interpretable entity-relation graph to intuitively represent picture elements and their relations, improving the usability of the underlying generative structures. Our between-subjects user study demonstrates that XCreation supports continuous plot creation with increased creativity, controllability, usability, and interpretability. Thanks to its multimodal nature, XCreation is applicable to various scenarios, including interactive storytelling and picture book creation.
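The abstract's key interpretability mechanism is an entity-relation graph over picture elements. The Python sketch below shows a minimal data model for such a graph and how it might be flattened into a text prompt for a generative model; the class and field names are hypothetical and this is not XCreation's actual implementation.

```python
# Minimal sketch of an entity-relation graph for one storybook scene.
# Illustrative only; names and structure are hypothetical, not XCreation's code.
from dataclasses import dataclass, field


@dataclass
class Entity:
    name: str                                        # e.g. "rabbit"
    attributes: dict = field(default_factory=dict)   # e.g. {"color": "white"}


@dataclass
class Relation:
    source: str   # entity name
    target: str   # entity name
    label: str    # e.g. "holds"


@dataclass
class SceneGraph:
    entities: dict = field(default_factory=dict)    # name -> Entity
    relations: list = field(default_factory=list)   # list of Relation

    def add_entity(self, name, **attrs):
        self.entities[name] = Entity(name, attrs)

    def relate(self, source, target, label):
        self.relations.append(Relation(source, target, label))

    def to_prompt(self):
        """Flatten the graph into a text prompt a generative model could consume."""
        parts = [f"{e.name} ({', '.join(f'{k}: {v}' for k, v in e.attributes.items())})"
                 for e in self.entities.values()]
        parts += [f"{r.source} {r.label} {r.target}" for r in self.relations]
        return "; ".join(parts)


# Example: one scene of a picture book.
scene = SceneGraph()
scene.add_entity("rabbit", color="white", mood="curious")
scene.add_entity("lantern", color="red")
scene.relate("rabbit", "lantern", "holds")
print(scene.to_prompt())
# rabbit (color: white, mood: curious); lantern (color: red); rabbit holds lantern
```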

Authors
Zihan Yan
MIT Media Lab, Cambridge, Massachusetts, United States
Chunxu Yang
UCLA, Los Angeles, California, United States
Qihao Liang
National University of Singapore, Singapore, Singapore
Xiang 'Anthony' Chen
UCLA, Los Angeles, California, United States
Paper URL

https://doi.org/10.1145/3586183.3606826

Video
Interactive Flexible Style Transfer for Vector Graphics
Abstract

Vector graphics are an industry-standard way to represent and share visual designs. Designers frequently source and incorporate styles from existing designs into their work. Unfortunately, popular design tools are not well suited for this task. We present VST, Vector Style Transfer, a novel design tool for flexibly transferring visual styles between vector graphics. The core of VST lies in leveraging automation while respecting designers' tastes and the subjectivity inherent to style transfer. In VST, designers tune a cross-design element correspondence and customize which style attributes to change. We report results from a user study in which designers used VST to control style transfer between several designs, including designs participants created with external tools beforehand. VST shows that enabling design correspondence tuning and customization is one way to support interactive, flexible style transfer.
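The abstract describes two designer-facing controls: tuning a cross-design element correspondence and choosing which style attributes to transfer. As a rough illustration of that idea (not VST's actual data model or code), the Python sketch below copies only the selected attributes along a user-provided correspondence; all element and attribute names are hypothetical.

```python
# Illustrative sketch of selective style transfer between two vector designs,
# loosely following the abstract's description of VST. Hypothetical data model.

source_design = {
    "title":  {"fill": "#1a1a1a", "font-family": "Georgia", "font-size": 32},
    "button": {"fill": "#0066ff", "stroke": "#003399", "rx": 8},
}

target_design = {
    "headline": {"fill": "#444444", "font-family": "Arial", "font-size": 28},
    "cta":      {"fill": "#cccccc", "stroke": "#999999", "rx": 2},
}

# Designer-tuned correspondence: target element -> source element.
correspondence = {"headline": "title", "cta": "button"}

# Designer-selected attributes to transfer (everything else stays untouched).
attributes_to_transfer = {"fill", "font-family"}


def transfer_style(source, target, correspondence, attributes):
    """Copy only the selected style attributes along the given correspondence."""
    result = {name: dict(style) for name, style in target.items()}
    for target_name, source_name in correspondence.items():
        for attr in attributes:
            if attr in source[source_name]:
                result[target_name][attr] = source[source_name][attr]
    return result


print(transfer_style(source_design, target_design, correspondence,
                     attributes_to_transfer))
```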

Authors
Jeremy Warner
UC Berkeley, Berkeley, California, United States
Kyu Won Kim
UC Berkeley, Berkeley, California, United States
Bjoern Hartmann
UC Berkeley, Berkeley, California, United States
Paper URL

https://doi.org/10.1145/3586183.3606751

Video
CurveCrafter: A System for Animated Curve Manipulation
Abstract

Linework on 3D animated characters is an important aspect of stylized looks for films. We present CurveCrafter, a system allowing animators to create new lines on 3D models and to edit the shape and opacity of silhouette curves. Our tools allow users to draw, redraw, erase, edit, and retime user-created curves. Silhouette curves can have their shape edited or reverted, and their opacity erased or revealed. Our algorithm for propagating edits over tracked silhouette curves ensures temporal consistency even as curves expand and merge. Five professional animators used our system to animate lines on three shots with different characters. Additionally, the effects lead from the short film "Pete" used our system to more easily recreate edits on a film shot. CurveCrafter successfully enhanced the resulting animations with additional linework.
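The abstract mentions an algorithm that propagates edits over tracked silhouette curves while preserving temporal consistency, but does not detail it. The toy Python sketch below illustrates the general idea under the assumption that tracked curves share point-to-point correspondence across frames: an edit made on one frame is blended onto neighboring frames with a linear temporal falloff. This is an illustrative stand-in, not CurveCrafter's algorithm.

```python
# Toy sketch of propagating a curve edit across frames with a temporal falloff.
# Assumes tracked curves with per-point correspondence between frames.

def propagate_edit(curves, edited_frame, point_offsets, falloff=3):
    """
    curves: list of frames; each frame is a list of (x, y) points on the tracked curve.
    point_offsets: {point_index: (dx, dy)} edit made on `edited_frame`.
    falloff: number of neighboring frames over which the edit fades to zero.
    Returns a new list of frames with the edit blended in.
    """
    result = []
    for f, frame in enumerate(curves):
        distance = abs(f - edited_frame)
        weight = max(0.0, 1.0 - distance / (falloff + 1))
        new_frame = list(frame)
        for i, (dx, dy) in point_offsets.items():
            if i < len(new_frame):
                x, y = new_frame[i]
                new_frame[i] = (x + weight * dx, y + weight * dy)
        result.append(new_frame)
    return result


# Example: a 5-frame sequence of a 3-point curve, edited on frame 2.
frames = [[(0, 0), (1, 1), (2, 0)] for _ in range(5)]
edited = propagate_edit(frames, edited_frame=2, point_offsets={1: (0.0, 0.5)})
for f, frame in enumerate(edited):
    print(f, [(round(x, 2), round(y, 2)) for x, y in frame])
```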

Authors
Nora S. Willett
Pixar Animation Studios, Emeryville, California, United States
Kurt Fleischer
Pixar Animation Studios, Emeryville, California, United States
Haldean Brown
Pixar Animation Studios, Emeryville, California, United States
Ilene L E
Princeton University, Princeton, New Jersey, United States
Mark Meyer
Pixar Animation Studios, Emeryville, California, United States
Paper URL

https://doi.org/10.1145/3586183.3606792

Video
PColorizor: Re-coloring Ancient Chinese Paintings with Ideorealm-congruent Poems
Abstract

Color restoration of ancient Chinese paintings plays a significant role in Chinese culture protection and inheritance. However, traditional color restoration is challenging and time-consuming because it requires professional restorers to conduct detailed literature reviews on numerous paintings for reference colors. After that, they have to fill in the inferred colors on the painting manually. In this paper, we present PColorizor, an interactive system that integrates advanced deep learning models and novel visualizations to ease the difficulties of color restoration. PColorizor is established on the principle of poem-painting congruence. Given a color-fading painting, we employ both explicit and implicit color guidance implied by ideorealm-congruent poems to associate reference paintings. To enable quick navigation of color schemes extracted from the reference paintings, we introduce a novel visualization based on a mountain metaphor that shows color distribution over time at the ideorealm and imagery levels. Moreover, we demonstrate the ideorealm understood by deep learning models through intuitive visualizations to bridge the communication gap between human restorers and deep learning models. We also adopt intelligent color-filling techniques to further accelerate manual color restoration. To evaluate PColorizor, we collaborate with domain experts to conduct two case studies to collect their feedback. The results suggest that PColorizor could be beneficial in enabling the effective restoration of color-fading paintings.
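The abstract refers to color schemes extracted from reference paintings for the restorer to browse, without specifying the extraction method. The Python sketch below is a hypothetical stand-in that derives a small scheme by coarsely quantizing pixel colors and keeping the most frequent ones; the synthetic pixel data is purely illustrative and this is not the paper's technique.

```python
# Illustrative sketch of extracting a small color scheme from a reference image
# by coarse quantization and frequency counting. Not PColorizor's actual method.
from collections import Counter


def extract_scheme(pixels, bucket=32, top_k=5):
    """
    pixels: iterable of (r, g, b) tuples in 0..255.
    bucket: quantization step; larger buckets merge similar colors more aggressively.
    top_k: number of dominant colors to return.
    """
    def quantize(channel):
        return min(255, (channel // bucket) * bucket + bucket // 2)

    counts = Counter((quantize(r), quantize(g), quantize(b)) for r, g, b in pixels)
    return [color for color, _ in counts.most_common(top_k)]


# Example with a tiny synthetic "painting": mostly ochre, some faded blue, some paper tone.
painting = [(200, 160, 90)] * 60 + [(60, 90, 140)] * 25 + [(230, 230, 220)] * 15
print(extract_scheme(painting, top_k=3))
```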

Authors
Tan Tang
Zhejiang University, Hangzhou, Zhejiang, China
Yanhong Wu
Zhejiang University, Hangzhou, Zhejiang, China
Peiquan Xia
Zhejiang University, Hangzhou, Zhejiang, China
Wange Wu
Zhejiang University, Hangzhou, Zhejiang, China
Xiaosong Wang
Zhejiang University, Hangzhou, Zhejiang, China
Yingcai Wu
Zhejiang University, Hangzhou, Zhejiang, China
Paper URL

https://doi.org/10.1145/3586183.3606814

Video
TaleStream: Supporting Story Ideation with Trope Knowledge
Abstract

Story ideation is a critical part of the story-writing process. It is challenging to support computationally due to its exploratory and subjective nature. Tropes, which are recurring narrative elements across stories, are essential in stories as they shape the structure of narratives and our understanding of them. In this paper, we propose to use tropes as an intermediate representation of stories to approach story ideation. We present TaleStream, a canvas system that uses tropes as building blocks of stories while providing steerable suggestions of story ideas in the form of tropes. Our trope suggestion methods leverage data from the tvtropes.org wiki. We find that 97% of the time, trope suggestions generated by our methods provide better story ideation materials than random tropes. Our system evaluation suggests that TaleStream can support writers’ creative flow and greatly facilitates story development. Tropes, as a rich lexicon of narratives with available examples, play a key role in TaleStream and hold promise for story-creation support systems.
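The abstract states that trope suggestions are computed from tvtropes.org data but does not describe the method. The Python sketch below illustrates one simple way such suggestions could work, ranking candidate tropes by how often they co-occur with tropes already on the canvas; the corpus and trope names are made up for illustration and this is not TaleStream's actual data or algorithm.

```python
# Minimal sketch of co-occurrence-based trope suggestion. Illustrative only.
from collections import Counter
from itertools import combinations

# Hypothetical corpus: sets of tropes used together in example stories.
stories = [
    {"Chosen One", "Mentor Archetype", "Call to Adventure"},
    {"Chosen One", "Call to Adventure", "Dark Lord"},
    {"Mentor Archetype", "Heroic Sacrifice", "Dark Lord"},
    {"Chosen One", "Heroic Sacrifice"},
]

# Count how often each pair of tropes co-occurs across the corpus.
cooccurrence = Counter()
for story in stories:
    for a, b in combinations(sorted(story), 2):
        cooccurrence[(a, b)] += 1


def suggest(current_tropes, top_k=3):
    """Rank tropes not yet on the canvas by total co-occurrence with current ones."""
    scores = Counter()
    for (a, b), count in cooccurrence.items():
        if a in current_tropes and b not in current_tropes:
            scores[b] += count
        elif b in current_tropes and a not in current_tropes:
            scores[a] += count
    return [trope for trope, _ in scores.most_common(top_k)]


print(suggest({"Chosen One"}))  # 'Call to Adventure' ranks first; ties may follow in any order
```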

Authors
Jean-Peïc Chou
Stanford University, Stanford, California, United States
Maneesh Agrawala
Stanford University, Stanford, California, United States
Alexa F. Siu
Adobe Research, San Jose, California, United States
Nedim Lipka
Adobe Systems, San Jose, California, United States
Ryan Rossi
Adobe Research, San Jose, California, United States
Franck Dernoncourt
Adobe Research, Seattle, Washington, United States
Paper URL

https://doi.org/10.1145/3586183.3606807

Video