1. Prototyping

Conference name
UIST 2024
ProtoDreamer: A Mixed-prototype Tool Combining Physical Model and Generative AI to Support Conceptual Design
Abstract

Prototyping serves as a critical phase in the industrial conceptual design process, enabling exploration of the problem space and identification of solutions. Recent advances in large-scale generative models have enabled AI to become a co-creator in this process. However, designers often find generative AI challenging because it requires following computer-centered interaction rules that diverge from their familiar design materials and languages. Physical prototyping is a commonly used design method that offers unique benefits, such as intuitive understanding and tangible testing. In this study, we propose ProtoDreamer, a mixed-prototype tool that synergizes generative AI with physical prototyping to support conceptual design. ProtoDreamer allows designers to construct preliminary prototypes from physical materials, while AI recognizes these forms along with vocal input to generate diverse design alternatives. The tool empowers designers to interact tangibly with prototypes, intuitively convey design intentions to AI, and continuously draw inspiration from the generated artifacts. An evaluation study confirms ProtoDreamer's utility and its strengths in time efficiency, creativity support, defect exposure, and facilitation of detailed thinking.

Authors
Hongbo ZHANG
Zhejiang University, Hangzhou, Zhejiang, China
Pei Chen
Zhejiang University, Hangzhou, China
Xuelong Xie
School of Computer Science and Technology, Hangzhou, Zhejiang, China
Chaoyi Lin
Zhejiang University, Hangzhou, Zhejiang, China
Lianyan Liu
Zhejiang University, Hangzhou, China
Zhuoshu Li
Zhejiang University, Hangzhou, China
Weitao You
College of Computer Science and Technology, Hangzhou, Zhejiang, China
Lingyun Sun
Zhejiang University, Hangzhou, China
Paper URL

https://doi.org/10.1145/3654777.3676399

Video
TorqueCapsules: Fully-Encapsulated Flywheel Actuation Modules for Designing and Prototyping Movement-Based and Kinesthetic Interaction
Abstract

Flywheels are unique, versatile actuators that store kinetic energy and convert it to torque, and they are widely used in aerospace, robotics, haptics, and more. However, prototyping interactions with flywheels is not trivial due to safety concerns, unintuitive operation, and implementation challenges. We present TorqueCapsules: self-contained, fully-encapsulated flywheel actuation modules that make flywheel actuators easy to control, safe to interact with, and quick to reconfigure and customize. Because each module fully encapsulates the actuator along with a wireless microcontroller, a battery, and other components, it can be readily attached or embedded in everyday objects, worn on the body, or combined with other devices. With our custom GUI, both novice and expert users can easily control multiple modules to design and prototype movements and kinesthetic haptics unique to flywheel actuation. We demonstrate various applications, including actuated everyday objects, wearable haptics, and expressive robots. We also conducted workshops in which novices and experts employed TorqueCapsules, collecting qualitative feedback and further application examples.

Authors
Willa Yunqi Yang
University of Chicago, Chicago, Illinois, United States
Yifan Zou
University of Chicago, Chicago, Illinois, United States
Jingle Huang
Independent Researcher, San Jose, California, United States
Raouf Abujaber
University of Chicago, Chicago, Illinois, United States
Ken Nakagaki
University of Chicago, Chicago, Illinois, United States
Paper URL

https://doi.org/10.1145/3654777.3676364

Video
AniCraft: Crafting Everyday Objects as Physical Proxies for Prototyping 3D Character Animation in Mixed Reality
Abstract

We introduce AniCraft, a mixed reality system for prototyping 3D character animation using physical proxies crafted from everyday objects. Unlike existing methods that require specialized equipment to support physical proxies, AniCraft requires only affordable markers, webcams, and readily available everyday objects and materials. AniCraft lets creators prototype character animations through three key stages: selecting virtual characters, fabricating physical proxies, and manipulating these proxies to animate the characters. This authoring workflow is underpinned by diverse physical proxies, manipulation types, and mapping strategies, which ease posing virtual characters and mapping user interactions with physical proxies to the animated movements of virtual characters. We provide a range of cases and potential applications to demonstrate how diverse physical proxies can inspire user creativity. User experiments show that our system can outperform traditional animation methods for rapid prototyping. Furthermore, we provide insights into the benefits and usage patterns of different materials, leading to design implications for future research.

Authors
Boyu Li
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Linping Yuan
The Hong Kong University of Science and Technology, Hong Kong, Hong Kong
Zhe Yan
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Qianxi Liu
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Yulin Shen
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Zeyu Wang
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Paper URL

https://doi.org/10.1145/3654777.3676325

Video
Mul-O: Encouraging Olfactory Innovation in Various Scenarios Through a Task-Oriented Development Platform
Abstract

Olfactory interfaces are pivotal in HCI, yet their development is hindered by limited application scenarios, stifling the discovery of new research opportunities. This challenge primarily stems from existing design tools focusing predominantly on odor display devices and the creation of standalone olfactory experiences, rather than enabling rapid adaptation to various contexts and tasks. Addressing this, we introduce Mul-O, a novel task-oriented development platform crafted to aid semi-professionals in navigating the diverse requirements of potential application scenarios and effectively prototyping ideas. Mul-O facilitates the swift association and integration of olfactory experiences into functional designs, system integrations, and concept validations. Comprising a web UI for task-oriented development, an API server for seamless third-party integration, and wireless olfactory display hardware, Mul-O significantly enhances the ideation and prototyping process in multisensory tasks. This was verified by a 15-day workshop attended by 30 participants. The workshop produced seven innovative projects, underscoring Mul-O's efficacy in fostering olfactory innovation.

Authors
Peizhong Gao
Tsinghua University, Beijing, China
Fan Liu
Tsinghua University, Beijing, China
Di Wen
Tsinghua University, Beijing, China
Yuze Gao
Tsinghua University, Beijing, China
Linxin Zhang
Tsinghua University, Beijing, China
Chikelei Wang
Tsinghua University, Beijing, China
Qiwei Zhang
Tsinghua University, Beijing, China
Yu Zhang
Tsinghua University, Beijing, China
Shao-en Ma
Independent Researcher, San Francisco, California, United States
Qi Lu
Tsinghua University, Beijing, China
Haipeng Mi
Tsinghua University, Beijing, China
Yingqing Xu
Tsinghua University, Beijing, China
Paper URL

https://doi.org/10.1145/3654777.3676387

Video