Malleable and Adaptive Interface

Conference Name
CHI 2025
Designing Physical Interactions with Triboelectric Material Sensing
Abstract

Physical interactions in Human-Computer Interaction (HCI) provide immersive ways for people to engage with technology. However, designers face challenges in integrating physical computing and modeling when designing physical interactions. We explore triboelectric material sensing, a promising technology that addresses these challenges, though its use within the design community remains underexplored. To bridge this gap, we develop a toolkit consisting of triboelectric material pairs, a mechanism taxonomy, a signal processing tool, and computer program templates. We introduce this toolkit to designers in two workshops, where reflections on the design process highlight its effectiveness and inspire innovative interaction designs. Our work contributes valuable resources and knowledge to the design community, making triboelectric sensing more accessible and fostering creativity in physical interaction design.
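
As a rough illustration of what the toolkit's signal-processing step might involve (not code from the paper), the sketch below detects contact events in a triboelectric voltage trace using a simple threshold with a refractory period; the threshold and refractory values are assumptions, not parameters reported by the authors.

```python
# Illustrative sketch (not from the paper): detecting touch events in a
# triboelectric voltage stream with a threshold + refractory period.
from typing import List

def detect_touch_events(samples: List[float],
                        threshold: float = 0.5,
                        refractory: int = 20) -> List[int]:
    """Return sample indices where a contact event likely occurred.

    `samples` is the ADC voltage trace; `threshold` (volts) and
    `refractory` (samples to ignore after a detection) are hypothetical
    tuning parameters, not values from the paper.
    """
    events = []
    skip_until = -1
    for i, v in enumerate(samples):
        if i <= skip_until:
            continue
        if abs(v) >= threshold:          # contact/separation produces a spike
            events.append(i)
            skip_until = i + refractory  # suppress ringing from the same event
    return events

# Example: a synthetic trace with two spikes.
trace = [0.02] * 50 + [0.9, 0.4, -0.3] + [0.01] * 100 + [-0.8, 0.6] + [0.0] * 50
print(detect_touch_events(trace))  # -> [50, 153]
```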

Authors
Xin Liu
National University of Singapore, Singapore, Singapore
Chengkuo Lee
National University of Singapore, Singapore, Singapore
Clement Zheng
National University of Singapore, Singapore, Singapore
Ching Chiuan Yen
National University of Singapore, Singapore, Singapore
DOI

10.1145/3706598.3714194

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714194

From Operation to Cognition: Automatic Modeling Cognitive Dependencies from User Demonstrations for GUI Task Automation
Abstract

Traditional Programming by Demonstration (PBD) systems primarily automate tasks by recording and replaying operations on Graphical User Interfaces (GUIs), without fully considering the cognitive processes behind operations. This limits their ability to generalize tasks with interdependent operations to new contexts (e.g., collecting and summarizing introductions depending on different search keywords from varied websites). We propose TaskMind, a system that automatically identifies the semantics of operations and the cognitive dependencies between operations from demonstrations, building a user-interpretable task graph. Users modify this graph to define new task goals, and TaskMind executes the graph to dynamically generalize new parameters for operations, with the integration of Large Language Models (LLMs). We compared TaskMind with a baseline end-to-end LLM that automates tasks from demonstrations and natural language commands without a task graph. In studies with 20 participants on both predefined and customized tasks, TaskMind significantly outperformed the baseline in both success rate and controllability.
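
To make the idea of a task graph with cognitive dependencies concrete, here is a minimal sketch, not the authors' implementation: operations carry dependencies on earlier operations' outputs, and execution resolves them in order, with the point where an LLM would generalize parameters stubbed out. All names (Operation, execute, the example task) are hypothetical.

```python
# Illustrative sketch: a minimal task graph where operations depend on the
# outputs of earlier operations, resolved at execution time. The LLM call
# that would generalize parameters is stubbed with plain lambdas.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Operation:
    name: str                                   # e.g. "type_search_keyword"
    depends_on: List[str] = field(default_factory=list)
    run: Callable[[Dict[str, str]], str] = lambda ctx: ""

def execute(graph: List[Operation]) -> Dict[str, str]:
    """Execute operations in dependency order, passing earlier outputs as context."""
    done: Dict[str, str] = {}
    pending = list(graph)
    while pending:
        ready = [op for op in pending if all(d in done for d in op.depends_on)]
        if not ready:
            raise ValueError("cyclic or unsatisfied dependency")
        for op in ready:
            ctx = {d: done[d] for d in op.depends_on}
            done[op.name] = op.run(ctx)   # an LLM could generalize parameters here
            pending.remove(op)
    return done

# Hypothetical task: search a keyword, then summarize the first result.
graph = [
    Operation("type_search_keyword", run=lambda ctx: "triboelectric sensing"),
    Operation("open_first_result", ["type_search_keyword"],
              run=lambda ctx: f"page about {ctx['type_search_keyword']}"),
    Operation("summarize_page", ["open_first_result"],
              run=lambda ctx: f"summary of {ctx['open_first_result']}"),
]
print(execute(graph)["summarize_page"])
```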

Authors
Yiwen Yin
Tsinghua University, Beijing, China
Yu Mei
Tsinghua University, Beijing, China
Chun Yu
Tsinghua University, Beijing, China
Toby Jia-Jun Li
University of Notre Dame, Notre Dame, Indiana, United States
Aamir Khan Jadoon
Tsinghua University, Beijing, China
Sixiang Cheng
Tsinghua University, Beijing, China
Weinan Shi
Tsinghua University, Beijing, China
Mohan Chen
Tsinghua University, Beijing, China
Yuanchun Shi
Tsinghua University, Beijing, China
DOI

10.1145/3706598.3713356

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713356

FusionProtor: A Mixed-Prototype Tool for Component-level Physical-to-Virtual 3D Transition and Simulation
Abstract

Developing and simulating 3D prototypes is crucial in product conceptual design for ideation and presentation. Traditional methods often keep physical and virtual prototypes separate, leading to a disjointed prototyping workflow. In addition, acquiring high-fidelity prototypes is time-consuming and resource-intensive, distracting designers from creative exploration. Recent advancements in generative artificial intelligence (GAI) and extended reality (XR) provide new solutions for rapid prototype transition and mixed simulation. We conducted a formative study to understand current challenges in the traditional prototyping process and to explore how GAI and XR capabilities can be used effectively in prototyping. We then introduce FusionProtor, a mixed-prototype tool for component-level 3D prototype transition and simulation. We propose a step-by-step generation pipeline in FusionProtor that transitions 3D prototypes from physical to virtual and from low to high fidelity for rapid ideation and iteration. We also introduce a component-level 3D creation method and apply it in an XR environment for mixed-prototype presentation and interaction. We conducted technical and user experiments to verify FusionProtor's usability in supporting diverse designs. Our results show that it achieves a seamless workflow between physical and virtual domains, enhancing efficiency and promoting ideation. We also explored the effect of mixed interaction on design and critically discuss best practices for the HCI community.
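
The following is a speculative, much-simplified sketch of a component-level, low- to high-fidelity generation pipeline of the kind the abstract describes; it is not FusionProtor's code, and the stage functions merely stand in for reconstruction or GAI services.

```python
# Illustrative sketch (not FusionProtor's pipeline): a component-level,
# low- to high-fidelity generation pipeline modeled as composable stages.
# All stage bodies are stubs standing in for GAI/reconstruction calls.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Component:
    name: str
    fidelity: str        # "physical_capture" -> "low_poly" -> "high_fidelity"
    asset: str           # placeholder for mesh/texture data

Stage = Callable[[Component], Component]

def capture_to_lowpoly(c: Component) -> Component:
    # A reconstruction/GAI service would go here.
    return Component(c.name, "low_poly", f"lowpoly({c.asset})")

def lowpoly_to_highfidelity(c: Component) -> Component:
    return Component(c.name, "high_fidelity", f"refined({c.asset})")

def run_pipeline(components: List[Component], stages: List[Stage]) -> List[Component]:
    """Push each physical component through every stage, in order."""
    out = []
    for c in components:
        for stage in stages:
            c = stage(c)
        out.append(c)
    return out

parts = [Component("handle", "physical_capture", "photo_01"),
         Component("body", "physical_capture", "photo_02")]
for p in run_pipeline(parts, [capture_to_lowpoly, lowpoly_to_highfidelity]):
    print(p.name, p.fidelity, p.asset)
```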

Authors
Hongbo Zhang
Zhejiang University, Hangzhou, Zhejiang, China
Pei Chen
Zhejiang University, Hangzhou, China
Xuelong Xie
School of Computer Science and Technology, Hangzhou, Zhejiang, China
Zhaoqu Jiang
Zhejiang University, Hangzhou, China
Yifei Wu
Zhejiang University, Hangzhou, China
Zejian Li
Zhejiang University, Ningbo, Zhejiang, China
Xiaoyu Chen
Zhejiang University, Hangzhou, China
Lingyun Sun
Zhejiang University, Hangzhou, China
DOI

10.1145/3706598.3713686

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713686

Generative and Malleable User Interfaces with Generative and Evolving Task-Driven Data Model
Abstract

Unlike static and rigid user interfaces, generative and malleable user interfaces offer the potential to respond to diverse users’ goals and tasks. However, current approaches primarily rely on generating code, making it difficult for end-users to iteratively tailor the generated interface to their evolving needs. We propose employing task-driven data models—representing the essential information entities, relationships, and data within information tasks—as the foundation for UI generation. We leverage AI to interpret users’ prompts and generate the data models that describe users’ intended tasks, and by mapping the data models with UI specifications, we can create generative user interfaces. End-users can easily modify and extend the interfaces via natural language and direct manipulation, with these interactions translated into changes in the underlying model. The technical evaluation of our approach and user evaluation of the developed system demonstrate the feasibility and effectiveness of generative and malleable user interfaces.
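
As an illustration only (not the paper's system), the sketch below models a task-driven data model as entities with attributes and relationships, and maps it to a hypothetical declarative UI specification; a user's natural-language edit would be translated into a change to this model, after which the UI specification is regenerated.

```python
# Illustrative sketch: a task-driven data model of entities and relationships,
# mapped to a simple declarative UI specification. Names are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Entity:
    name: str                                                 # e.g. "Hotel"
    attributes: List[str]                                     # e.g. ["name", "price"]
    relations: Dict[str, str] = field(default_factory=dict)   # e.g. {"located_in": "City"}

def to_ui_spec(model: List[Entity]) -> List[dict]:
    """Map each entity to a hypothetical UI component specification."""
    spec = []
    for e in model:
        spec.append({
            "component": "collection_view",
            "entity": e.name,
            "columns": e.attributes,
            # related entities become navigable links in the generated UI
            "links": [f"{rel} -> {target}" for rel, target in e.relations.items()],
        })
    return spec

model = [
    Entity("Hotel", ["name", "price", "rating"], {"located_in": "City"}),
    Entity("City", ["name", "country"]),
]
for component in to_ui_spec(model):
    print(component)
```

In this reading, direct manipulation or a prompt such as "also show hotel ratings" would edit `model` rather than generated code, and the interface would be re-derived from the updated model.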

Authors
Yining Cao
University of California, San Diego, San Diego, California, United States
Peiling Jiang
University of California San Diego, San Diego, California, United States
Haijun Xia
University of California, San Diego, San Diego, California, United States
DOI

10.1145/3706598.3713285

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713285

Interaction Substrates: Combining Power and Simplicity in Interactive Systems
Abstract

Today’s graphical user interfaces tend to be either simple but limited, or powerful but overly complex. In order to combine power and simplicity, we introduce Substrates, which act as “places for interaction” where users can manipulate objects of interest in a principled and predictable way. Substrates structure and contain data, enforce user-defined constraints among objects and manage dependencies with other substrates. Users can “tune” and “tweak” these relationships, “curry” specialized tools or abstract relationships into interactive templates. We first define substrates and provide in-depth descriptions with examples of their key characteristics. After explaining how Substrates extend the concept of Instrumental Interaction, we apply a Generative Theory of Interaction approach to analyze and critique existing interfaces and then show how using the concepts of Instruments and Substrates inspired novel design ideas in three graduate-level HCI courses. We conclude with a discussion and directions for future work.
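
A minimal sketch of the substrate idea, not the authors' framework: a substrate contains objects, re-applies user-defined constraints whenever an object changes, and notifies dependent substrates. The class and method names here are invented for illustration.

```python
# Illustrative sketch: a substrate that contains objects, enforces
# user-defined constraints among them, and propagates changes to
# dependent substrates.
from typing import Callable, Dict, List

class Substrate:
    def __init__(self, name: str):
        self.name = name
        self.objects: Dict[str, dict] = {}
        self.constraints: List[Callable[[Dict[str, dict]], None]] = []
        self.dependents: List["Substrate"] = []

    def add_constraint(self, fn: Callable[[Dict[str, dict]], None]) -> None:
        self.constraints.append(fn)

    def set(self, obj_id: str, **props) -> None:
        self.objects.setdefault(obj_id, {}).update(props)
        for fn in self.constraints:       # keep objects consistent
            fn(self.objects)
        for dep in self.dependents:       # propagate to dependent substrates
            dep.on_upstream_change(self)

    def on_upstream_change(self, upstream: "Substrate") -> None:
        print(f"{self.name}: recomputing because {upstream.name} changed")

# Hypothetical user-defined constraint: rectangles keep a 2:1 aspect ratio.
def keep_aspect(objects: Dict[str, dict]) -> None:
    for obj in objects.values():
        if "width" in obj:
            obj["height"] = obj["width"] / 2

canvas = Substrate("canvas")
legend = Substrate("legend")
canvas.dependents.append(legend)
canvas.add_constraint(keep_aspect)
canvas.set("rect1", width=100)
print(canvas.objects["rect1"])   # {'width': 100, 'height': 50.0}
```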

Authors
Wendy E. Mackay
Université Paris-Saclay, CNRS, Inria, Orsay, France
Michel Beaudouin-Lafon
Université Paris-Saclay, CNRS, Inria, Orsay, France
DOI

10.1145/3706598.3714006

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714006

Malleable Overview-Detail Interfaces
Abstract

The overview-detail design pattern, characterized by an overview of multiple items and a detailed view of a selected item, is ubiquitously implemented across software interfaces. Designers often try to account for all users, but ultimately these interfaces settle on a single form. For instance, an overview map may display hotel prices but omit other user-desired attributes. This research instead explores the malleable overview-detail interface, one that end-users can customize to address individual needs. Our content analysis of overview-detail interfaces uncovered three dimensions of variation: content, composition, and layout, enabling us to develop customization techniques along these dimensions. For content, we developed Fluid Attributes, a set of techniques enabling users to show and hide attributes between views and leverage AI to manipulate, reformat, and generate new attributes. For composition and layout, we provided solutions to compose multiple overviews and detail views and transform between various overview and overview-detail layouts. A user study on our techniques implemented in two design probes revealed that participants produced diverse customizations and unique usage patterns, highlighting the need for, and broad applicability of, malleable overview-detail interfaces.
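
To illustrate the content dimension (not the paper's implementation), the sketch below models an overview and a detail view as attribute lists over the same items, with a Fluid Attributes-style operation that lets a user move an attribute between views; all data and names are hypothetical.

```python
# Illustrative sketch: an overview-detail interface as two views over the same
# items, where end-users relocate attributes between the views.
from typing import Dict, List

items = [
    {"name": "Hotel A", "price": 120, "rating": 4.5, "description": "Near the beach"},
    {"name": "Hotel B", "price": 95,  "rating": 4.1, "description": "City center"},
]

views: Dict[str, List[str]] = {
    "overview": ["name", "price"],            # shown for every item
    "detail":   ["rating", "description"],    # shown for the selected item
}

def move_attribute(attr: str, src: str, dst: str) -> None:
    """Customize the interface by relocating an attribute between views."""
    views[src].remove(attr)
    views[dst].append(attr)

def render(selected: int) -> None:
    for it in items:
        print({k: it[k] for k in views["overview"]})
    print("selected ->", {k: items[selected][k] for k in views["detail"]})

render(selected=0)
move_attribute("rating", "detail", "overview")   # user surfaces ratings in the overview
render(selected=0)
```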

Authors
Bryan Min
University of California San Diego, San Diego, California, United States
Allen Chen
University of California San Diego, San Diego, California, United States
Yining Cao
University of California, San Diego, San Diego, California, United States
Haijun Xia
University of California, San Diego, San Diego, California, United States
DOI

10.1145/3706598.3714164

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714164

XR-penter: Material-Aware and In Situ Design of Scrap Wood Assemblies
Abstract

Woodworkers have to navigate multiple considerations when planning a project, including available resources, skill level, and intended effort. Do-it-yourself (DIY) woodworkers face these challenges most acutely because of tight material constraints and a desire for custom designs tailored to specific spaces. To address these needs, we present XR-penter, an extended reality (XR) application that supports in situ, material-aware woodworking for casual makers. Our system enables users to design virtual scrap wood assemblies directly in their workspace, encouraging sustainable practices through the use of discarded materials. Users register physical material as virtual twins, manipulate these twins into an assembly in XR (while receiving feedback on material usage and alignment with their surroundings), and preview the cuts needed for fabrication. We conducted a case study and feedback sessions demonstrating that XR-penter supports improvisational workflows in practice, and found that woodworkers who prioritize material-driven and adaptive workflows would benefit most from our system.
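
A small illustrative sketch, not XR-penter's code: scrap pieces are registered as virtual twins with rough dimensions, a planned part is matched against the available stock, and a simple material-usage figure is reported. The dimensions and helper names are assumptions made for the example.

```python
# Illustrative sketch: registering scrap wood as virtual twins and checking
# whether a planned part fits the available stock, with a usage report.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ScrapTwin:
    label: str
    length_mm: float
    width_mm: float
    thickness_mm: float

def find_stock(part: ScrapTwin, stock: List[ScrapTwin]) -> Optional[ScrapTwin]:
    """Return the first scrap piece large enough for the requested part."""
    for piece in stock:
        if (piece.length_mm >= part.length_mm and
                piece.width_mm >= part.width_mm and
                piece.thickness_mm >= part.thickness_mm):
            return piece
    return None

stock = [ScrapTwin("offcut-1", 600, 90, 18), ScrapTwin("offcut-2", 300, 140, 12)]
shelf = ScrapTwin("shelf", 280, 120, 12)

match = find_stock(shelf, stock)
if match:
    usage = (shelf.length_mm * shelf.width_mm) / (match.length_mm * match.width_mm)
    print(f"cut {shelf.label} from {match.label}; face usage {usage:.0%}")
else:
    print("no scrap piece fits this part")
```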

Authors
Ramya Iyer
Georgia Institute of Technology, Atlanta, Georgia, United States
Mustafa Doga Dogan
Adobe Research, Basel, Switzerland
Maria Larsson
The University of Tokyo, Tokyo, Japan
Takeo Igarashi
The University of Tokyo, Tokyo, Japan
DOI

10.1145/3706598.3713331

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713331
