Authoring 3D scenes is a central task for spatial computing applications. Competing visions for lowering existing barriers either (1) focus on immersive, direct manipulation of 3D content or (2) leverage AI techniques that capture real scenes (3D Radiance Fields such as NeRFs and 3D Gaussian Splatting) and modify them at a higher level of abstraction, at the cost of high latency. We unify the complementary strengths of these approaches and investigate how to integrate generative AI advances into real-time, immersive 3D Radiance Field editing. We introduce Dreamcrafter, a VR-based 3D scene editing system that: (1) provides a modular architecture to integrate generative AI algorithms; (2) combines different levels of control for creating objects, including natural language and direct manipulation; and (3) introduces proxy representations that support interaction during high-latency operations. We contribute empirical findings on control preferences and discuss how generative AI interfaces beyond text input enhance creativity in scene editing and world building.
https://dl.acm.org/doi/10.1145/3706598.3714312
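The proxy-representation idea above can be illustrated with a short, purely hypothetical sketch (not Dreamcrafter's actual implementation): a placeholder object is inserted into the scene immediately so the user can keep manipulating it, and the generated asset is swapped in once the slow model call returns. All class names and file names here are invented for illustration.

    # Hypothetical proxy pattern for high-latency generative edits.
    import time
    from concurrent.futures import ThreadPoolExecutor
    from dataclasses import dataclass

    @dataclass
    class SceneObject:
        name: str
        asset: str          # e.g. a path to a mesh or splat file
        is_proxy: bool = False

    class Scene:
        def __init__(self):
            self.objects: dict[str, SceneObject] = {}
            self._pool = ThreadPoolExecutor(max_workers=2)

        def request_generation(self, name: str, prompt: str):
            # 1. Insert a manipulable proxy right away so the user keeps working.
            self.objects[name] = SceneObject(name, asset="proxy_cube.glb", is_proxy=True)
            # 2. Run the slow generative call off the interaction thread.
            future = self._pool.submit(self._slow_generate, prompt)
            future.add_done_callback(lambda f: self._swap_in(name, f.result()))

        def _slow_generate(self, prompt: str) -> str:
            time.sleep(2.0)                 # stand-in for a generative model call
            return f"generated_{prompt.replace(' ', '_')}.glb"

        def _swap_in(self, name: str, asset_path: str):
            # Transforms applied to the proxy would be preserved here (omitted).
            self.objects[name] = SceneObject(name, asset=asset_path, is_proxy=False)

    scene = Scene()
    scene.request_generation("lamp", "brass desk lamp")
    print(scene.objects["lamp"].is_proxy)   # True immediately
    time.sleep(2.5)
    print(scene.objects["lamp"].asset)      # swapped to the generated asset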
Shape Cast is our novel software tool designed to simplify the creation of plaster molds for ceramic slip casting by automating the 3D modeling process. Instead of modeling the molds themselves, artists input a single 2D profile of the desired pot, which Shape Cast uses to generate ready-to-print 3D molds for plaster, accommodating factors such as clay shrinkage and mold structural requirements. We detail the mold generation process and associated software implementation. We provide case studies demonstrating the capabilities of Shape Cast. We opened a beta version of Shape Cast to the public, and 501 users have signed up, creating a total of 626 fully finalized 3D models. We detail feedback from questionnaire responses of 17 users.
https://dl.acm.org/doi/10.1145/3706598.3713866
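As a rough illustration of the profile-to-mold step described above, the following sketch revolves a 2D pot profile around its axis and pre-scales it for clay shrinkage. The 12% shrinkage factor and the plain surface-of-revolution approach are assumptions; Shape Cast's real pipeline also handles mold walls and other structural requirements.

    # Sketch: revolve a (radius, height) profile into a 3D vertex grid,
    # pre-scaled so the fired piece ends up at the intended size.
    import numpy as np

    def revolve_profile(profile_rz, shrinkage=0.12, segments=64):
        """profile_rz: (N, 2) array of (radius, height) points of the pot wall."""
        profile = np.asarray(profile_rz, dtype=float)
        # Fired clay shrinks, so the cast form is made larger up front.
        scale = 1.0 / (1.0 - shrinkage)
        r = profile[:, 0] * scale
        z = profile[:, 1] * scale
        theta = np.linspace(0.0, 2.0 * np.pi, segments, endpoint=False)
        # Sweep the profile around the z-axis: vertex [i, j] is profile point i
        # rotated to angle theta[j].
        x = np.outer(r, np.cos(theta))
        y = np.outer(r, np.sin(theta))
        zz = np.repeat(z[:, None], segments, axis=1)
        return np.stack([x, y, zz], axis=-1)   # (N, segments, 3) vertex grid

    # Example: a simple bowl profile with an assumed 12% shrinkage.
    bowl = [(0.0, 0.0), (4.0, 0.5), (5.5, 4.0), (5.0, 6.0)]
    print(revolve_profile(bowl).shape)   # (4, 64, 3)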
In 3D design, specifying design objectives and visualizing complex shapes through text alone proves to be a significant challenge. Although advancements in 3D GenAI have significantly enhanced part assembly and the creation of high-quality 3D designs, many systems still struggle to dynamically generate and edit design elements based on shape parameters. To bridge this gap, we propose GenPara, an interactive 3D design editing system that leverages text-conditional shape parameters of part-aware 3D designs and visualizes the design space within the Exploration Map and Design Versioning Tree. Additionally, among the various shape parameters generated by an LLM, the system extracts and provides design outcomes within the user's regions of interest based on Bayesian inference. A user study (N = 16) revealed that GenPara enhanced designers' comprehension and management of text-conditional shape parameters, streamlining design exploration and concretization. This improvement boosted the efficiency and creativity of the 3D design process.
https://dl.acm.org/doi/10.1145/3706598.3713502
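To make the Bayesian region-of-interest idea concrete, here is an illustrative sketch (not GenPara's published method): each bin of a shape parameter's range keeps a Beta posterior over whether the user likes designs there, posteriors are updated from feedback, and new values are proposed by Thompson sampling. The parameter name and feedback data are hypothetical.

    # Sketch: Bayesian narrowing of a shape parameter to a region of interest.
    import numpy as np

    class ParameterInterest:
        def __init__(self, low, high, bins=10):
            self.edges = np.linspace(low, high, bins + 1)
            self.alpha = np.ones(bins)   # Beta(1, 1) prior per bin
            self.beta = np.ones(bins)

        def update(self, value, liked: bool):
            b = min(np.searchsorted(self.edges, value) - 1, len(self.alpha) - 1)
            b = max(b, 0)
            if liked:
                self.alpha[b] += 1
            else:
                self.beta[b] += 1

        def propose(self, rng=None):
            # Thompson sampling: draw from each bin's posterior, pick the best
            # bin, then return a parameter value uniformly inside it.
            if rng is None:
                rng = np.random.default_rng()
            draws = rng.beta(self.alpha, self.beta)
            b = int(np.argmax(draws))
            return rng.uniform(self.edges[b], self.edges[b + 1])

    # Example: a hypothetical "handle curvature" parameter; the user likes ~0.7.
    interest = ParameterInterest(0.0, 1.0)
    for v, liked in [(0.2, False), (0.65, True), (0.72, True), (0.9, False)]:
        interest.update(v, liked)
    print(round(interest.propose(), 2))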
In this paper, we present Xstrings, a method for designing and fabricating 3D printed objects with integrated cable-driven mechanisms that can be printed in one go without the need for manual assembly. Xstrings supports four types of cable-driven interactions—bend, coil, screw and compress—which are activated by applying an input force to the cables. To facilitate the design of Xstrings objects, we present a design tool that allows users to embed cable-driven mechanisms into object geometries based on their desired interactions by automatically placing joints and cables inside the object. To assess our system, we investigate the effect of printing parameters on the strength of Xstrings objects and the extent to which the interactions are repeatable without cable breakage. We demonstrate the application potential of Xstrings through examples such as manipulable gripping, bionic robot manufacturing, and dynamic prototyping.
https://dl.acm.org/doi/10.1145/3706598.3714282
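One step the Xstrings abstract mentions, automatic joint placement along a cable path, could look roughly like the sketch below. It only spaces joints at even arc-length intervals along a polyline; the real tool additionally accounts for joint type (bend, coil, screw, compress) and printability, so this is an assumption-laden simplification.

    # Sketch: place joints at even arc-length intervals along a cable path.
    import numpy as np

    def place_joints(path_pts, joint_spacing):
        """path_pts: polyline through the object; returns joint positions."""
        pts = np.asarray(path_pts, dtype=float)
        seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
        arc = np.concatenate([[0.0], np.cumsum(seg)])      # cumulative length
        targets = np.arange(joint_spacing, arc[-1], joint_spacing)
        joints = []
        for t in targets:
            i = np.searchsorted(arc, t) - 1                 # segment containing t
            u = (t - arc[i]) / seg[i]                       # interpolation factor
            joints.append(pts[i] + u * (pts[i + 1] - pts[i]))
        return np.array(joints)

    # Example: a finger-like path with a joint every 8 mm; the cable channel
    # would be routed past each joint.
    path = [(0, 0, 0), (0, 0, 20), (5, 0, 35), (12, 0, 45)]
    print(place_joints(path, 8.0).round(1))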
When designing 3D objects in 3D virtual environments using naturalistic 3D user interfaces, people use their hands to manipulate the environment and the objects inside it. At the same time, people use their spatial thinking to understand the spatial relationships of the objects in the scene. Yet, the relationship between spatial thinking and hand actions remains unclear. Here, we present a user study with 18 participants that examines the association between 3D assembly tasks and reflective hand movements that allow people to enhance their spatial thinking. Using a mixed-methods protocol, we identified nine SPATIAL HAND ACTIONS and three SPATIAL THEMES people use when designing 3D objects. We then analyzed a subset of the participants to understand the relationship between SPATIAL HAND ACTIONS and spatial abilities. Our results will help develop better hand-based naturalistic 3DUIs that consider the spatial thinking abilities of their users.
https://dl.acm.org/doi/10.1145/3706598.3713424
Compliant mechanisms enable the creation of compact and easy-to-fabricate devices for tangible interaction. This work explores interconnected compliant mechanisms consisting of multiple joints and rigid bodies that transmit and process displacements as signals resulting from physical interactions. Because these devices are difficult to design due to their vast and complex design space, we developed a graph-based design algorithm and computational tool to help users program and customize such computational functions and procedurally model physical designs. When combined with active materials with actuation and sensing capabilities, these devices can also render and detect haptic interaction. Our design examples demonstrate the tool’s capability to address relevant HCI concepts, including building modular physical interface toolkits, encrypting tangible interactions, and customizing user augmentation for accessibility. We believe the tool will facilitate the generation of new interfaces with enriched affordances.
https://dl.acm.org/doi/10.1145/3706598.3714307
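The graph-based view in the last abstract can be sketched as a toy displacement-propagation model: joints and rigid bodies are nodes, edges carry transmission ratios, and an input displacement is pushed through the graph. The ratios, the summation rule, and the tree-shaped traversal below are illustrative assumptions, not the paper's algorithm.

    # Toy model: nodes are joints/rigid bodies, edges carry a transmission
    # ratio (lever arm, sign inversion, ...).
    from collections import defaultdict

    class MechanismGraph:
        def __init__(self):
            self.edges = defaultdict(list)   # src -> list of (dst, ratio)

        def connect(self, src, dst, ratio):
            self.edges[src].append((dst, ratio))

        def propagate(self, inputs):
            """inputs: {node: displacement}. Assumes a tree-shaped mechanism."""
            disp = defaultdict(float, inputs)
            frontier = list(inputs)
            while frontier:
                node = frontier.pop()
                for dst, ratio in self.edges[node]:
                    disp[dst] += ratio * disp[node]
                    frontier.append(dst)
            return dict(disp)

    # Pressing the input 2 mm drives two outputs through a lever: one moves
    # with the input and one against it (a simple differential behaviour).
    g = MechanismGraph()
    g.connect("button", "lever", 0.5)
    g.connect("lever", "out_a", 1.0)
    g.connect("lever", "out_b", -1.0)
    print(g.propagate({"button": 2.0}))   # {'button': 2.0, 'lever': 1.0, ...}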