Creativity Tools

Conference Name
CHI 2024
EyeGuide & EyeConGuide: Gaze-based Visual Guides to Improve 3D Sketching Systems
Abstract

Visual guides help to align strokes and improve accuracy in Virtual Reality (VR) sketching tools. Automatic guides that appear at the relevant sketching areas make guided sketching more seamless. We explore guides that exploit eye-tracking to adapt to the user's visual attention. EyeGuide and EyeConGuide make visual grid fragments appear spatially close to the user's intended sketches, based on the user's eye-gaze direction and the 3D position of the hand. We evaluated the techniques in two user studies across simple and complex sketching objectives in VR. The results show that gaze-based guides have a positive effect on sketching accuracy, perceived usability, and preference over manual activation in the tested tasks. Our research contributes to integrating gaze-contingent techniques for assistive guides and presents important insights for multimodal design applications in VR.
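
As a rough, hypothetical illustration of the interaction described above (a grid fragment placed near where the user intends to sketch, inferred from the eye-gaze ray and the 3D hand position), here is a minimal sketch. The nearest-point-on-ray heuristic, the grid spacing, and all function names are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): place a grid fragment
# near where the user is likely to sketch, inferred from the eye-gaze ray
# and the 3D hand position. Plain vector math, no VR SDK dependencies.
import numpy as np

GRID_CELL = 0.05  # assumed grid spacing in metres


def closest_point_on_ray(origin, direction, point):
    """Project `point` onto the gaze ray origin + t * direction (t >= 0)."""
    direction = direction / np.linalg.norm(direction)
    t = max(0.0, np.dot(point - origin, direction))
    return origin + t * direction


def guide_anchor(gaze_origin, gaze_dir, hand_pos):
    """Estimate where to show a grid fragment: the point on the gaze ray
    nearest to the hand, snapped to the assumed grid."""
    target = closest_point_on_ray(np.asarray(gaze_origin, float),
                                  np.asarray(gaze_dir, float),
                                  np.asarray(hand_pos, float))
    return np.round(target / GRID_CELL) * GRID_CELL


if __name__ == "__main__":
    # Example: the user looks forward and holds the controller slightly
    # to the right; the guide snaps to the nearest grid cell on the gaze ray.
    print(guide_anchor(gaze_origin=[0, 1.6, 0],
                       gaze_dir=[0, 0, -1],
                       hand_pos=[0.2, 1.4, -0.5]))
```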

Authors
Rumeysa Turkmen
Kadir Has University, Istanbul, Turkey
Zeynep Ecem Gelmez
Kadir Has University, Istanbul, Turkey
Anil Ufuk Batmaz
Concordia University, Montreal, Quebec, Canada
Wolfgang Stuerzlinger
Simon Fraser University, Vancouver, British Columbia, Canada
Paul Asente
Adobe Research (retired), Redwood City, California, United States
Mine Sarac
Kadir Has University, Istanbul, Turkey
Ken Pfeuffer
Aarhus University, Aarhus, Denmark
Mayra Donaji Barrera Machuca
Dalhousie University, Halifax, Nova Scotia, Canada
Paper URL

doi.org/10.1145/3613904.3641947

Video

Formulating or Fixating: Effects of Examples on Problem Solving Vary as a Function of Example Presentation Interface Design
Abstract

Interactive systems that facilitate exposure to examples can augment problem-solving performance. However, designers of such systems often face many practical design decisions about how users will interact with examples, with little clear theoretical guidance. To understand how example interaction design choices affect whether and how people benefit from examples, we conducted an experiment where 182 participants worked on a controlled analog to an exploratory creativity task, with access to examples of varying diversity and presentation interfaces. Task performance was worse when examples were presented in a list, compared to contextualized in the exploration space or shown in a dropdown list. Example lists were associated with more fixation, whereas contextualized examples were associated with using examples to formulate a model of the problem space to guide exploration. We discuss implications of these results for a theoretical framework that maps design choices to fundamental psychological mechanisms of creative inspiration from examples.

Authors
Joel Chan
University of Maryland, College Park, Maryland, United States
Zijian Ding
University of Maryland, West Hyattsville, Maryland, United States
Eesh Kamrah
University of Maryland, Potomac, Maryland, United States
Mark Fuge
University of Maryland, College Park, Maryland, United States
Paper URL

doi.org/10.1145/3613904.3642653

Video

GenQuery: Supporting Expressive Visual Search with Generative Models
Abstract

Designers rely on visual search to explore and develop ideas in early design stages. However, designers can struggle to identify suitable text queries to initiate a search, or to discover images for similarity-based search that adequately express their intent. We propose GenQuery, a novel system that integrates generative models into the visual search process. GenQuery can automatically elaborate on users' queries and surface concrete search directions when users only have abstract ideas. To support precise expression of search intents, the system enables users to generatively modify images and use these in similarity-based search. In a comparative user study (N=16), designers felt that they could more accurately express their intents and find more satisfactory outcomes with GenQuery compared to a tool without generative features. Furthermore, the unpredictability of the generated results allowed participants to uncover more diverse outcomes. By supporting both convergence and divergence, GenQuery led to a more creative experience.
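
As a hypothetical illustration of the pipeline the abstract describes (generatively elaborating an abstract query, then ranking a gallery of images by embedding similarity), here is a minimal sketch. The callables generate_elaborations and embed are placeholders for whatever text and image models the real system uses; none of this is GenQuery's actual code.

```python
# Minimal sketch (assumptions, not GenQuery's code): elaborate an abstract
# query with a generative model, then rank gallery images by cosine
# similarity of their embeddings to each elaborated query.
from typing import Callable, List, Sequence, Tuple
import numpy as np


def rank_by_similarity(query_vec: np.ndarray,
                       gallery: Sequence[Tuple[str, np.ndarray]],
                       top_k: int = 5) -> List[str]:
    """Return the ids of the top_k gallery images by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    scored = [(np.dot(q, v / np.linalg.norm(v)), image_id)
              for image_id, v in gallery]
    return [image_id for _, image_id in sorted(scored, reverse=True)[:top_k]]


def expressive_search(abstract_query: str,
                      generate_elaborations: Callable[[str], List[str]],
                      embed: Callable[[str], np.ndarray],
                      gallery: Sequence[Tuple[str, np.ndarray]]) -> dict:
    """For each generated elaboration of the user's abstract idea, retrieve
    similar images as concrete search directions."""
    return {elaboration: rank_by_similarity(embed(elaboration), gallery)
            for elaboration in generate_elaborations(abstract_query)}
```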

Authors
Kihoon Son
KAIST, Daejeon, Korea, Republic of
DaEun Choi
KAIST, Daejeon, Korea, Republic of
Tae Soo Kim
KAIST, Daejeon, Korea, Republic of
Young-Ho Kim
NAVER AI Lab, Seongnam, Gyeonggi, Korea, Republic of
Juho Kim
KAIST, Daejeon, Korea, Republic of
Paper URL

doi.org/10.1145/3613904.3642847

Video

Inkeraction: An Interaction Modality Powered by Ink Recognition and Synthesis
Abstract

Ink is a powerful medium for note-taking and creativity tasks. Multi-touch devices and stylus input have made digital ink editable and searchable. To extend the capabilities of digital ink, we introduce Inkeraction, an interaction modality powered by ink recognition and synthesis. Inkeraction segments and classifies digital ink objects (e.g., handwriting and sketches), identifies relationships between them, and generates strokes in different writing styles. Inkeraction reshapes the design space for digital ink by enabling features that include: (1) assisting users in manipulating ink objects, (2) providing word-processor features such as spell checking, (3) automating repetitive writing tasks such as transcription, and (4) bridging to generative-model features such as brainstorming. Feedback from two user studies with a total of 22 participants demonstrated that Inkeraction supported writing activities by enabling participants to write faster, with fewer steps, and with better writing quality.
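
As a hypothetical illustration of the recognize-relate-synthesize flow the abstract outlines, the following minimal sketch wires together placeholder models passed in as callables; the data structures and function names are assumptions for illustration, not Google's implementation.

```python
# Minimal sketch (not the paper's implementation): segment raw strokes into
# ink objects, classify and recognize them, link related objects, and
# synthesize new strokes in an existing handwriting style.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class InkObject:
    strokes: list                 # raw stroke points belonging to this object
    kind: str = "unknown"         # e.g. "handwriting" or "sketch"
    text: str = ""                # recognized text, if handwriting
    related_to: List[int] = field(default_factory=list)  # indices of linked objects


def process_ink(strokes: list,
                segment: Callable[[list], List[list]],
                classify: Callable[[list], str],
                recognize: Callable[[list], str],
                relate: Callable[[List[InkObject]], None]) -> List[InkObject]:
    """Segment raw strokes into objects, classify each one, recognize
    handwriting, and let `relate` fill in relationships between objects."""
    objects = []
    for group in segment(strokes):
        kind = classify(group)
        text = recognize(group) if kind == "handwriting" else ""
        objects.append(InkObject(strokes=group, kind=kind, text=text))
    relate(objects)
    return objects


def synthesize_strokes(text: str, style_reference: InkObject,
                       generator: Callable[[str, list], list]) -> list:
    """Generate strokes that render `text` in the style of an existing
    handwriting object, e.g. for spell-check fixes or transcription."""
    return generator(text, style_reference.strokes)
```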

Authors
Lei Shi
Google Research, Mountain View, California, United States
Rachel Campbell
Google Inc., Melbourne, Australia
Peggy Chi
Google Research, Mountain View, California, United States
Maria Cirimele
Google Inc., Stockholm, Sweden
Mike Cleron
Google Research, Mountain View, California, United States
Kirsten Climer
Google Research, Mountain View, California, United States
Chelsey Q. Fleming
Google Research, West Hollywood, California, United States
Ashwin Ganti
Google Research, Mountain View, California, United States
Philippe Gervais
Google Research, Zurich, Switzerland
Pedro Gonnet
Google Research, Zurich, Switzerland
Tayeb A. Karim
Google Inc., Cambridge, Massachusetts, United States
Andrii Maksai
Google Research, Zurich, Switzerland
Chris Melancon
Google Research, Mountain View, California, United States
Rob Mickle
Google Inc., Boulder, Colorado, United States
Claudiu Musat
Google Research, Zurich, Switzerland
Palash Nandy
Google Research, Mountain View, California, United States
Xiaoyu Iris Qu
Google Research, New York, New York, United States
David Robishaw
Google Research, Mountain View, California, United States
Angad Singh
Google Research, Cambridge, Massachusetts, United States
Mathangi Venkatesan
Google Research, Mountain View, California, United States
Paper URL

doi.org/10.1145/3613904.3642498

Video

Personalizing Products with Stylized Head Portraits for Self-Expression
Abstract

Personalizing products aesthetically or functionally can help users increase personal relevance and support self-expression. However, using non-abstract personal data such as head portraits for product personalization has been understudied. While recent advances in Artificial Intelligence have enabled generating stylized head portraits, these images also raise concerns about lack of control, artificiality, and ethics, which potentially limit their broader use. In this work, we present PicMe, a design support tool that converts user face photos into stylized head portraits as vector graphics that can be used to personalize products. To enable style transfer, PicMe leverages a deep-learning-based algorithm trained on an extended open-source illustration dataset of characters in a cartoonish and minimalistic style. We evaluated PicMe through two experiments and a user study. The results of our evaluation showed that PicMe can help create personalized head portraits that support self-expression.
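
As a hypothetical illustration of the photo-to-vector-portrait flow described above, the following minimal sketch passes the style-transfer model and the vectorizer in as callables; the names and interfaces are assumptions for illustration, not PicMe's actual code.

```python
# Minimal sketch (assumptions, not PicMe's code): read a face photo, run it
# through a style-transfer model, and convert the stylized raster result
# into SVG markup for use on product mock-ups.
from typing import Callable


def stylize_portrait(photo_path: str,
                     style_model: Callable[[bytes], bytes],
                     vectorize: Callable[[bytes], str]) -> str:
    """Return SVG markup of a stylized head portrait for personalization."""
    with open(photo_path, "rb") as f:
        raster = style_model(f.read())   # cartoonish/minimalistic raster image
    return vectorize(raster)             # SVG string, scalable for printing
```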

Authors
Yang Shi
College of Design and Innovation, Tongji University, Shanghai, China
Yechun Peng
Tongji University, Shanghai, China
Shengqi Dang
College of Design and Innovation, Tongji University, Shanghai, China
Nanxuan Zhao
Harvard University, Cambridge, Massachusetts, United States
Nan Cao
Tongji College of Design and Innovation, Shanghai, China
Paper URL

doi.org/10.1145/3613904.3642391

Video