Physical interactions in Human-Computer Interaction (HCI) provide immersive ways for people to engage with technology. However, designers face challenges in integrating physical computing and modeling when designing physical interactions. We explore triboelectric material sensing, a promising technology that addresses these challenges, though its use within the design community remains underexplored. To bridge this gap, we develop a toolkit consisting of triboelectric material pairs, a mechanism taxonomy, a signal processing tool, and computer program templates. We introduce this toolkit to designers in two workshops, where reflections on the design process highlight its effectiveness and inspire innovative interaction designs. Our work contributes valuable resources and knowledge to the design community, making triboelectric sensing more accessible and fostering creativity in physical interaction design.
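To make the signal-processing component concrete, the sketch below illustrates the kind of event detection such a tool might perform: flagging contact events in a triboelectric voltage trace with simple peak detection. It is a minimal illustration assuming a generic one-channel voltage signal and hypothetical threshold values, not the toolkit's actual processing pipeline.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_contact_events(voltage, fs=1000, threshold=0.5, min_gap_s=0.05):
    """Return sample indices of contact events in a triboelectric voltage trace.

    voltage   : 1-D array of sensor readings (volts)
    fs        : sampling rate in Hz
    threshold : minimum peak height treated as a contact (hypothetical value)
    min_gap_s : minimum time between distinct events, in seconds
    """
    peaks, _ = find_peaks(voltage, height=threshold, distance=int(min_gap_s * fs))
    return peaks

if __name__ == "__main__":
    fs = 1000
    t = np.arange(0, 2.0, 1 / fs)
    # Simulated trace: baseline noise plus two sharp contact spikes.
    signal = 0.05 * np.random.randn(t.size)
    signal[500] += 2.0   # contact at t = 0.5 s
    signal[1500] += 1.5  # contact at t = 1.5 s
    events = detect_contact_events(signal, fs=fs)
    print("Contact events at t =", t[events], "s")
```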
Traditional Programming by Demonstration (PBD) systems primarily automate tasks by recording and replaying operations on Graphical User Interfaces (GUIs), without fully considering the cognitive processes behind those operations. This limits their ability to generalize tasks with interdependent operations to new contexts (e.g., collecting and summarizing introductions from varied websites depending on different search keywords). We propose TaskMind, a system that automatically identifies the semantics of operations and the cognitive dependencies between them from demonstrations, building a user-interpretable task graph. Users modify this graph to define new task goals, and TaskMind executes the graph with the help of Large Language Models (LLMs) to dynamically generalize new parameters for each operation. We compared TaskMind with an end-to-end LLM baseline that automates tasks from demonstrations and natural-language commands without a task graph. In studies with 20 participants on both predefined and customized tasks, TaskMind significantly outperformed the baseline in both success rate and controllability.
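As an illustration of the structure a user-interpretable task graph might take, the sketch below models recorded operations with semantic targets and explicit dependencies, and derives an execution order from them. The schema, the operation names, and the two-step example task are hypothetical, and the LLM-based parameter generalization is omitted.

```python
from dataclasses import dataclass, field

@dataclass
class Operation:
    """A GUI operation recorded from a demonstration (hypothetical schema)."""
    op_id: str
    action: str                      # e.g. "type", "click", "copy"
    target: str                      # semantic description of the UI element
    params: dict = field(default_factory=dict)
    depends_on: list = field(default_factory=list)  # ids of operations whose output this one uses

def execution_order(ops):
    """Order operations so each runs after its dependencies (assumes the graph is acyclic)."""
    order, done = [], set()
    def visit(op):
        if op.op_id in done:
            return
        for dep in op.depends_on:
            visit(ops[dep])
        done.add(op.op_id)
        order.append(op)
    for op in ops.values():
        visit(op)
    return order

# A toy two-step task: search for a keyword, then copy the first result's introduction.
ops = {
    "search": Operation("search", "type", "search box", {"text": "<keyword>"}),
    "copy":   Operation("copy", "copy", "first result introduction", depends_on=["search"]),
}
for op in execution_order(ops):
    print(op.op_id, "->", op.action, op.target)
```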
Developing and simulating 3D prototypes is crucial in product conceptual design for ideation and presentation. Traditional methods often keep physical and virtual prototypes separate, leading to a disjointed prototyping workflow. In addition, acquiring high-fidelity prototypes is time-consuming and resource-intensive, distracting designers from creative exploration. Recent advances in generative artificial intelligence (GAI) and extended reality (XR) offer new solutions for rapid prototype transition and mixed simulation. We conducted a formative study to understand the challenges in the traditional prototyping process and to explore how to effectively leverage GAI and XR capabilities in prototyping. We then introduce FusionProtor, a mixed-prototype tool for component-level 3D prototype transition and simulation. We propose a step-by-step generation pipeline in FusionProtor that transitions 3D prototypes from physical to virtual and from low to high fidelity for rapid ideation and iteration. We also develop a component-level 3D creation method and apply it in an XR environment for mixed-prototype presentation and interaction. We conducted technical and user experiments to verify FusionProtor’s usability in supporting diverse designs. Our results show that it achieves a seamless workflow between physical and virtual domains, enhancing efficiency and promoting ideation. We also explored the effect of mixed interaction on design and critically discuss best practices for the HCI community.
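To suggest what a component-level, step-by-step pipeline could look like in code, the sketch below stages a physical part from photograph to low-fidelity virtual twin to a higher-fidelity asset handed to an XR scene. The stage names, file handles, and stubbed generative step are assumptions for illustration, not FusionProtor's implementation.

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    fidelity: str   # "low-poly" or "high-fidelity"
    asset: str      # path or handle to the underlying 3D asset (placeholder)

def capture_component(photo_path):
    """Stage 1: register a photographed physical part as a low-poly virtual twin (stub)."""
    return Component(name=photo_path, fidelity="low-poly", asset=f"{photo_path}.lowpoly.glb")

def refine_component(component, prompt):
    """Stage 2: raise fidelity; a text-conditioned generative 3D model would run here (stubbed)."""
    return Component(component.name, "high-fidelity", component.asset.replace("lowpoly", "hifi"))

def assemble(components):
    """Stage 3: hand the refined components to the XR scene for mixed-prototype simulation."""
    return {"scene": [c.asset for c in components]}

parts = [capture_component(p) for p in ["handle.jpg", "body.jpg"]]
refined = [refine_component(c, prompt="matte aluminium finish") for c in parts]
print(assemble(refined))
```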
Unlike static and rigid user interfaces, generative and malleable user interfaces offer the potential to respond to diverse users’ goals and tasks. However, current approaches primarily rely on generating code, making it difficult for end-users to iteratively tailor the generated interface to their evolving needs. We propose employing task-driven data models—representing the essential information entities, relationships, and data within information tasks—as the foundation for UI generation. We leverage AI to interpret users’ prompts and generate the data models that describe users’ intended tasks, and by mapping the data models to UI specifications, we can create generative user interfaces. End-users can easily modify and extend the interfaces via natural language and direct manipulation, with these interactions translated into changes in the underlying model. The technical evaluation of our approach and user evaluation of the developed system demonstrate the feasibility and effectiveness of generative and malleable user interfaces.
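The sketch below illustrates one possible reading of this approach: a small task-driven data model of entities, attributes, and relationships is mapped to a UI specification by simple rules. The schema and mapping rules are illustrative assumptions, not the system's actual representation.

```python
# A minimal sketch of mapping a task-driven data model to a UI specification.
data_model = {
    "entities": {
        "Trip": {"attributes": ["destination", "start_date", "end_date"]},
        "Hotel": {"attributes": ["name", "price", "rating"]},
    },
    "relationships": [
        {"from": "Trip", "to": "Hotel", "kind": "one-to-many"},
    ],
}

def to_ui_spec(model):
    """Map each entity to a form widget and each one-to-many relationship to a nested list."""
    spec = []
    for name, entity in model["entities"].items():
        spec.append({"widget": "form", "entity": name, "fields": entity["attributes"]})
    for rel in model["relationships"]:
        if rel["kind"] == "one-to-many":
            spec.append({"widget": "list", "parent": rel["from"], "items": rel["to"]})
    return spec

for widget in to_ui_spec(data_model):
    print(widget)
```

A natural-language edit such as "also track hotel ratings" would then be translated into a change to `data_model` rather than to generated code, and the UI re-derived from the updated model.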
Today’s graphical user interfaces tend to be either simple but limited, or powerful but overly complex. To combine power and simplicity, we introduce Substrates, which act as “places for interaction” where users can manipulate objects of interest in a principled and predictable way. Substrates structure and contain data, enforce user-defined constraints among objects, and manage dependencies with other substrates. Users can “tune” and “tweak” these relationships, and “curry” specialized tools or abstract relationships into interactive templates. We first define substrates and provide in-depth descriptions with examples of their key characteristics. After explaining how Substrates extend the concept of Instrumental Interaction, we apply a Generative Theory of Interaction approach to analyze and critique existing interfaces, and then show how the concepts of Instruments and Substrates inspired novel design ideas in three graduate-level HCI courses. We conclude with a discussion and directions for future work.
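As one possible concretization of these ideas, the sketch below implements a toy substrate that contains objects, enforces a user-defined constraint after every change, and notifies a dependent substrate. It is an illustrative reading of the concept, not the paper's implementation.

```python
class Substrate:
    """A container that holds objects, enforces constraints, and notifies dependents.

    Illustrative reading of the substrate concept; not the paper's implementation.
    """
    def __init__(self, name):
        self.name = name
        self.objects = {}
        self.constraints = []   # callables applied to the object dict after every change
        self.dependents = []    # other substrates notified when this one changes

    def set(self, key, value):
        self.objects[key] = value
        for constraint in self.constraints:
            constraint(self.objects)
        for dep in self.dependents:
            dep.on_upstream_change(self)

    def on_upstream_change(self, upstream):
        print(f"{self.name}: reacting to change in {upstream.name}")

# A user-defined constraint: keep a rectangle's width at twice its height.
layout = Substrate("layout")
layout.constraints.append(lambda objs: objs.update(width=2 * objs.get("height", 0)))

preview = Substrate("preview")
layout.dependents.append(preview)

layout.set("height", 50)
print(layout.objects)   # {'height': 50, 'width': 100}
```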
The overview-detail design pattern, characterized by an overview of multiple items and a detailed view of a selected item, is ubiquitously implemented across software interfaces. Designers often try to account for all users, but ultimately these interfaces settle on a single form. For instance, an overview map may display hotel prices but omit other attributes users care about. This research instead explores the malleable overview-detail interface, one that end-users can customize to address their individual needs. Our content analysis of overview-detail interfaces uncovered three dimensions of variation: content, composition, and layout, enabling us to develop customization techniques along these dimensions. For content, we developed Fluid Attributes, a set of techniques that let users show and hide attributes across views and leverage AI to manipulate, reformat, and generate new attributes. For composition and layout, we provided solutions for composing multiple overviews and detail views and for transforming between various overview and overview-detail layouts. A user study of these techniques, implemented in two design probes, revealed that participants produced diverse customizations and unique usage patterns, highlighting the need for and broad applicability of malleable overview-detail interfaces.
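To make one Fluid Attributes-style operation concrete, the sketch below models an overview and a detail view as attribute lists over shared items and lets a user promote an attribute from the detail view into the overview. The data and the two-view model are assumptions for illustration, not the design probes' implementation.

```python
# Hypothetical items shared by an overview list and a detail view.
hotels = [
    {"name": "Alder Inn", "price": 120, "rating": 4.5, "distance_km": 0.8},
    {"name": "Birch Hotel", "price": 95, "rating": 4.1, "distance_km": 2.3},
]

views = {
    "overview": ["name", "price"],                        # attributes shown per item in the overview
    "detail":   ["name", "price", "rating", "distance_km"],
}

def show_in_overview(attribute):
    """Promote an attribute from the detail view into the overview."""
    if attribute in views["detail"] and attribute not in views["overview"]:
        views["overview"].append(attribute)

def render(view):
    for item in hotels:
        print({key: item[key] for key in views[view]})

show_in_overview("rating")   # the user pulls "rating" up into the overview
render("overview")
```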
Woodworkers have to navigate multiple considerations when planning a project, including available resources, skill level, and intended effort. Do-it-yourself (DIY) woodworkers face these challenges most acutely because of tight material constraints and a desire for custom designs tailored to specific spaces. To address these needs, we present XR-penter, an extended reality (XR) application that supports in-situ, material-aware woodworking for casual makers. Our system enables users to design virtual scrap-wood assemblies directly in their workspace, encouraging sustainable practices through the use of discarded materials. Users register physical material as virtual twins, manipulate these twins into an assembly in XR while receiving feedback on material usage and alignment with their surroundings, and preview the cuts needed for fabrication. We conducted a case study and feedback sessions demonstrating that XR-penter supports improvisational workflows in practice, and found that woodworkers who prioritize material-driven and adaptive workflows would benefit most from our system.
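As a rough illustration of material-aware planning over virtual twins, the sketch below registers scrap boards with their dimensions, reserves pieces on them, and reports the implied cuts and remaining material. The data model and numbers are hypothetical and far simpler than XR-penter's in-situ, 3D workflow.

```python
from dataclasses import dataclass

@dataclass
class Board:
    """A registered piece of scrap wood, i.e. the physical part's virtual twin."""
    board_id: str
    length_cm: float
    used_cm: float = 0.0

    def remaining(self):
        return self.length_cm - self.used_cm

def plan_cut(board, piece_length_cm):
    """Reserve a piece on a board and return the cut it implies, if it fits."""
    if piece_length_cm > board.remaining():
        return None
    start = board.used_cm
    board.used_cm += piece_length_cm
    return {"board": board.board_id, "cut_at_cm": (start, board.used_cm)}

scraps = [Board("oak-1", 120.0), Board("pine-2", 80.0)]
cuts = [plan_cut(scraps[0], 45.0), plan_cut(scraps[0], 60.0), plan_cut(scraps[1], 30.0)]
print("Cut preview:", [c for c in cuts if c])
print("Material left:", {b.board_id: b.remaining() for b in scraps})
```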