The field of Human-Computer Interaction (HCI) has consistently used kinematic mechanisms to create tangible dynamic interfaces and objects. However, designing and fabricating these mechanisms is challenging due to complex spatial structures, step-by-step assembly processes, and unstable joint connections caused by the inevitable matching errors between separate parts. In this paper, we propose an integrated fabrication method for one-step FDM 3D printing (FDM3DP) of kinematic mechanisms to create dynamic objects without additional post-processing. We describe Arch-printing and Support-bridges, a method we call All-in-One Print, which compiles arbitrary solid 3D models into printable kinematic models as G-code for FDM3DP. To expand the design space, we investigate a series of motion structures (e.g., rotate, slide, and screw) with multi-stability and develop a design tool that helps users quickly design such dynamic objects. We also demonstrate various application cases, including physical interfaces, toys with interactive aesthetics, and daily items with internalized functions.
https://doi.org/10.1145/3544548.3581440
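A minimal sketch of one piece of the All-in-One Print pipeline described above: emitting G-code for thin "support bridges" extruded straight across the clearance gap between two parts of a printed-in-place mechanism so they stay connected during printing. The actual compiler works on arbitrary solid models and emits complete print jobs; the gap geometry, feed rate, filament diameter, and extrusion math below are illustrative assumptions, not the paper's actual parameters.

import math

FILAMENT_AREA = math.pi * (1.75 / 2) ** 2      # mm^2, assuming 1.75 mm filament
LINE_AREA = 0.4 * 0.2                          # mm^2, 0.4 mm line width x 0.2 mm layer

def bridge_gcode(x_positions, y_start, y_end, z, feed=1200):
    """G-code for straight bridge lines spanning the gap from y_start to y_end.

    Assumes relative extrusion (M83); all values are placeholders.
    """
    lines = [f"G0 Z{z:.2f} F{feed}"]
    for x in x_positions:
        length = abs(y_end - y_start)
        e = length * LINE_AREA / FILAMENT_AREA  # filament length for this line
        lines.append(f"G0 X{x:.2f} Y{y_start:.2f}")         # travel to bridge start
        lines.append(f"G1 X{x:.2f} Y{y_end:.2f} E{e:.4f}")  # extrude across the gap
    return "\n".join(lines)

# Example: three 8 mm bridges, 5 mm apart, printed at z = 6.2 mm
print(bridge_gcode([10.0, 15.0, 20.0], y_start=0.0, y_end=8.0, z=6.2))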
Access to computer-aided fabrication tools, such as 3D printing, empowers various craft techniques and democratizes the creation of artifacts. To bring blow molding techniques into the field of Human-Computer Interaction, we work to simplify this challenging manual fabrication process and enrich its design space by taking advantage of the thermoformability and heat deformability of 3D-printed thermoplastics. We propose PneuFab, a novel and democratized blow molding technique enabled by FDM 3D-printed custom structures and temporal triggering methods. We then implement and evaluate a design tool that allows users to adjust parameters and preview the resulting forms until they achieve their desired shapes. By showcasing a design space that includes artifacts with complex geometries and tunable stiffness, we hope to expand access to digital blow molding fabrication and explore what more it can become.
https://doi.org/10.1145/3544548.3580923
We present Kerfmeter, a hardware + software device that automatically determines how much material the laser cutter burns off, also known as kerf. Its knowledge about kerf allows Kerfmeter to make the joints of laser cut 3D models fit together with just the right tension, i.e., loose enough to allow for comfortable assembly, yet tight enough to hold parts together without glue, all without user interaction. Kerfmeter attaches to the head of a laser cutter and works as follows: when users send a model to the laser cutter, Kerfmeter intercepts the job, injects a brief calibration routine that determines kerf, dilates the cutting plan according to this kerf, and then proceeds to fabricate the cutting plan. During the calibration routine, Kerfmeter cuts a 2 cm Archimedean spiral and uses a motor to rotate it in place until it jams against the surrounding material; the angle at which the spiral jams allows Kerfmeter to infer kerf. The calibration process takes about 20 s, which is more than 10× faster than traditional, manual kerf calibration, while also eliminating the need for expertise. In our technical evaluation, Kerfmeter produced functioning press-fit joints reliably at a precision comparable to traditional manual kerf strips. Kerfmeter makes it easy to sample repeatedly; we demonstrate how this allows boosting precision past any traditional kerf strip.
https://doi.org/10.1145/3544548.3580914
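A minimal sketch of the kerf-from-jam-angle idea in the Kerfmeter abstract above. The exact geometry Kerfmeter uses is not given here; this assumes an Archimedean spiral r = a + b*theta, so rotating the cut-out spiral by an angle advances its boundary radially by b times that angle, and the spiral jams once that advance consumes the radial clearance left by the cut (taken here to equal the kerf). Names and numbers are illustrative.

import math

def kerf_from_jam_angle(pitch_mm: float, jam_angle_deg: float) -> float:
    """Estimate kerf (mm) from the angle at which the spiral jams.

    pitch_mm      -- radial growth of the spiral per full turn (2*pi*b)
    jam_angle_deg -- rotation at which the spiral contacts the surrounding stock
    """
    b = pitch_mm / (2.0 * math.pi)      # radial growth per radian
    jam_angle_rad = math.radians(jam_angle_deg)
    return b * jam_angle_rad            # clearance consumed at the jam = kerf (assumed)

# Example: a spiral with 2 mm pitch jamming after 36 degrees of rotation
print(f"estimated kerf: {kerf_from_jam_angle(2.0, 36.0):.3f} mm")  # ~0.200 mm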
We introduce ThrowIO, a novel style of actuated tangible user interface that enables throwing-and-catching spatial interaction powered by mobile wheeled robots on overhanging surfaces. In our approach, users throw magnet-embedded objects so they stick to an overhanging ferromagnetic surface, where wheeled robots can move them and drop them at desired locations for users to catch. The thrown objects are tracked with an RGBD camera system to perform closed-loop robotic manipulation. By computationally facilitating throwing and catching interaction, our approach can be applied in many applications, including kinesthetic learning, gaming, immersive haptic experiences, ceiling storage, and communication. We demonstrate these applications with a proof-of-concept system comprising wheeled robots, ceiling hardware, and software control. Overall, ThrowIO opens up novel spatial, dynamic, and tangible interaction via overhanging robots, with great potential to be integrated into our everyday spaces.
https://doi.org/10.1145/3544548.3581267
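A minimal sketch of the closed-loop delivery step described in the ThrowIO abstract: each control tick compares the robot's tracked pose (from the RGBD system) with the position of the stuck object or the drop goal and returns the next motion command. Coordinates are 2D positions on the ceiling plane; the command format, tolerance, and state handling are illustrative assumptions, not the system's actual API.

import math

ARRIVAL_TOLERANCE_M = 0.03   # how close the robot must be before acting (assumed)

def next_command(robot_xy, object_xy, goal_xy, carrying):
    """One tick of the pick-up / drop-off loop. Returns (command, argument)."""
    target = goal_xy if carrying else object_xy
    dx, dy = target[0] - robot_xy[0], target[1] - robot_xy[1]
    dist = math.hypot(dx, dy)
    if dist > ARRIVAL_TOLERANCE_M:
        return ("drive", (dx / dist, dy / dist))   # unit direction toward the target
    return ("release", None) if carrying else ("engage", None)

# Example tick: robot at (0.5, 0.2), object stuck at (1.0, 1.4), goal at (2.0, 0.0)
print(next_command((0.5, 0.2), (1.0, 1.4), (2.0, 0.0), carrying=False))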
Data physicalizations encode data and meaning through geometry or material properties, providing a non-planar view of data and offering novel opportunities for interrogation, discovery, and presentation. This field has explored how single users interact with complex 3D data, but the challenges of applying this technology in collaborative situations have not been addressed. We describe a study exploring interactions and preferences among co-located individuals using a dynamic data physicalization in the form of a shape-changing bar chart, and compare this to previous work with single participants. Results suggest that co-located interactions with physical data prompt non-interactive hand gestures, a mirroring of physicalizations, and novel hand gestures in comparison to single-participant studies. We also note that behavioural similarities between participants in interactive tabletop studies and data physicalization studies may be capitalised upon for further development of these dynamic representations. Finally, we consider the implications and challenges for the adoption of these types of platforms.
https://doi.org/10.1145/3544548.3581214
Spatial augmented reality (SAR) can extend desktop computing out of the monitor and into our surroundings, but extending the standard style of mouse input is challenging due to irregular real-world geometry, gaps, and occlusion. We identify two general approaches for controlling a mouse cursor in SAR: perspective-based approaches built on raycasting, such as Nacenta et al.'s Perspective Cursor, and geometry-based approaches that closely associate cursor movement with surface topology. For the latter, we introduce Everywhere Cursor, a geometry-based approach for indirect mouse cursor control on complex 3D surface geometry in SAR. A controlled experiment compares these approaches. Results show the geometry-based Everywhere Cursor improves accuracy and precision by 29% to 60% on average in a tracing task, but when traversing long distances, the perspective-based Perspective Cursor and Raycasting techniques are 22% to 49% faster, albeit with 4% to 10% higher error rates.
https://doi.org/10.1145/3544548.3580849
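A minimal sketch of the perspective-based style of cursor control contrasted in the abstract above: mouse deltas steer a ray cast from the user's viewpoint, and the cursor is drawn wherever that ray hits the projection surface. The scene here is reduced to a single plane, whereas the actual systems intersect against the full room geometry; the gain, class name, and plane are illustrative assumptions rather than the paper's implementation.

import numpy as np

class PerspectiveCursor:
    def __init__(self, eye, yaw=0.0, pitch=0.0, gain=0.002):
        self.eye = np.asarray(eye, dtype=float)   # user viewpoint in room coordinates
        self.yaw, self.pitch, self.gain = yaw, pitch, gain

    def move(self, dx_px, dy_px):
        """Convert 2D mouse motion into a change of ray direction."""
        self.yaw   += dx_px * self.gain
        self.pitch -= dy_px * self.gain

    def ray_direction(self):
        cp = np.cos(self.pitch)
        return np.array([cp * np.sin(self.yaw), np.sin(self.pitch), cp * np.cos(self.yaw)])

    def hit_on_plane(self, plane_point, plane_normal):
        """Return the 3D cursor position where the ray meets a plane, or None."""
        d = self.ray_direction()
        n = np.asarray(plane_normal, dtype=float)
        denom = d @ n
        if abs(denom) < 1e-9:
            return None                             # ray parallel to the surface
        t = ((np.asarray(plane_point, dtype=float) - self.eye) @ n) / denom
        return self.eye + t * d if t > 0 else None  # only hits in front of the user

# Example: eye at standing height, small mouse move, cursor lands on a wall at z = 3 m
cursor = PerspectiveCursor(eye=[0.0, 1.6, 0.0])
cursor.move(dx_px=120, dy_px=-40)
print(cursor.hit_on_plane(plane_point=[0, 0, 3], plane_normal=[0, 0, -1]))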