EyeGuide & EyeConGuide: Gaze-based Visual Guides to Improve 3D Sketching Systems
Description

Visual guides help align strokes and improve accuracy in Virtual Reality (VR) sketching tools. Automatic guides that appear at relevant sketching areas enable seamless guided sketching. We explore guides that exploit eye-tracking to adapt to the user's visual attention. EyeGuide and EyeConGuide cause visual grid fragments to appear spatially close to the user's intended sketches, based on the user's eye-gaze direction and the 3D position of the hand. We evaluated the techniques in two user studies across simple and complex sketching tasks in VR. The results show that, in the tested tasks, gaze-based guides have a positive effect on sketching accuracy and perceived usability and are preferred over manual activation. Our research contributes to integrating gaze-contingent techniques for assistive guides and presents important insights into multimodal design applications in VR.
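The gaze-and-hand activation described above can be sketched as a simple proximity test: show a grid fragment near the hand only when the gaze ray points at it. This is a minimal illustrative sketch, not the paper's implementation; the function name, the angular threshold, and the vector math are assumptions.

```python
import math

def _normalize(v):
    # Scale a 3D vector to unit length.
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def guide_anchor(head_pos, gaze_dir, hand_pos, angle_thresh_deg=10.0):
    """Return a 3D anchor for a grid fragment at the hand when the gaze
    ray (from the head, along gaze_dir) points close to the hand;
    return None to keep the guide hidden. Threshold is an assumption."""
    to_hand = _normalize(tuple(h - p for h, p in zip(hand_pos, head_pos)))
    gaze = _normalize(gaze_dir)
    # Angle between the gaze direction and the head-to-hand direction.
    cos_angle = sum(a * b for a, b in zip(gaze, to_hand))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return hand_pos if angle <= angle_thresh_deg else None
```

In use, a sketching system would call this each frame and fade the grid fragment in at the returned anchor, so the guide follows visual attention rather than requiring manual activation.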

Formulating or Fixating: Effects of Examples on Problem Solving Vary as a Function of Example Presentation Interface Design
Description

Interactive systems that facilitate exposure to examples can augment problem-solving performance. However, designers of such systems face many practical decisions about how users will interact with examples, with little clear theoretical guidance. To understand how example interaction design choices affect whether and how people benefit from examples, we conducted an experiment in which 182 participants worked on a controlled analog of an exploratory creativity task, with access to examples of varying diversity and presentation interfaces. Task performance was worse when examples were presented in a list than when they were contextualized in the exploration space or shown in a dropdown list. Example lists were associated with more fixation, whereas contextualized examples were associated with using examples to formulate a model of the problem space to guide exploration. We discuss implications of these results for a theoretical framework that maps design choices to fundamental psychological mechanisms of creative inspiration from examples.

GenQuery: Supporting Expressive Visual Search with Generative Models
Description

Designers rely on visual search to explore and develop ideas in early design stages. However, designers can struggle to identify suitable text queries to initiate a search or to discover images for similarity-based search that can adequately express their intent. We propose GenQuery, a novel system that integrates generative models into the visual search process. GenQuery can automatically elaborate on users' queries and surface concrete search directions when users only have abstract ideas. To support precise expression of search intents, the system enables users to generatively modify images and use these in similarity-based search. In a comparative user study (N=16), designers felt that they could more accurately express their intents and find more satisfactory outcomes with GenQuery compared to a tool without generative features. Furthermore, the unpredictability of generations allowed participants to uncover more diverse outcomes. By supporting both convergence and divergence, GenQuery led to a more creative experience.

Inkeraction: An Interaction Modality Powered by Ink Recognition and Synthesis
Description

Ink is a powerful medium for note-taking and creativity tasks. Multi-touch devices and stylus input have made digital ink editable and searchable. To extend the capabilities of digital ink, we introduce Inkeraction, an interaction modality powered by ink recognition and synthesis. Inkeraction segments and classifies digital ink objects (e.g., handwriting and sketches), identifies relationships between them, and generates strokes in different writing styles. Inkeraction reshapes the design space for digital ink by enabling features that include: (1) assisting users in manipulating ink objects, (2) providing word-processor features such as spell checking, (3) automating repetitive writing tasks such as transcribing, and (4) bridging to generative-model features such as brainstorming. Feedback from two user studies with a total of 22 participants demonstrated that Inkeraction supported writing activities by enabling participants to write faster with fewer steps and achieve better writing quality.
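The segment-and-classify step can be illustrated with a toy heuristic: label a stroke group by the shape of its bounding box, since handwriting tends to run in wide, shallow lines. Real systems such as Inkeraction use learned recognizers; the function name and threshold below are purely illustrative assumptions.

```python
def classify_ink_object(points):
    """Toy classifier for a group of ink points given as (x, y) pairs.
    Label wide, shallow stroke groups as handwriting, anything else as
    a sketch. A learned model would replace this heuristic in practice."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    # Handwriting bounding boxes are typically much wider than tall.
    return "handwriting" if width > 3 * max(height, 1e-6) else "sketch"
```

A pipeline in this spirit would first cluster strokes into objects, classify each object, and then offer object-specific actions (spell checking for handwriting, shape manipulation for sketches).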

Personalizing Products with Stylized Head Portraits for Self-Expression
Description

Personalizing products aesthetically or functionally can help users increase personal relevance and support self-expression. However, using non-abstract personal data such as head portraits for product personalization has been understudied. While recent advances in Artificial Intelligence have enabled generating stylized head portraits, these images also raise concerns about lack of control, artificiality, and ethics, which potentially limit their broader use. In this work, we present PicMe, a design support tool that converts user face photos into stylized head portraits as vector graphics that can be used to personalize products. To enable style transfer, PicMe leverages a deep-learning-based algorithm trained on an extended open-source illustration dataset of characters in a cartoonish and minimalistic style. We evaluated PicMe through two experiments and a user study. The results of our evaluation showed that PicMe can help create personalized head portraits that support self-expression.
