Fabrication, Input, Sensing

Conference Name
CHI 2023
CoilCAM: Enabling Parametric Design for Clay 3D Printing Through an Action-Oriented Toolpath Programming System
Abstract

Clay 3D printing provides the benefits of digital fabrication automation and reconfigurability through a method that evokes manual clay coiling. Existing design technologies for clay 3D printing reflect the general 3D printing workflow in which solid forms are designed in CAD and then converted to a toolpath. In contrast, in hand-coiling, form is determined by the actions taken by the artist’s hands through space in response to the material. We theorize that an action-oriented approach for clay 3D printing could allow creators to design digital fabrication toolpaths that reflect clay material properties. We present CoilCAM, a domain-specific CAM programming system that supports the integrated generation of parametric forms and surface textures through mathematically defined toolpath operations. We developed CoilCAM in collaboration with ceramics professionals and evaluated CoilCAM’s relevance to manual ceramics by reinterpreting hand-made ceramic vessels. This process revealed the importance of iterative variation and embodied experience in action-oriented workflows.

Award
Honorable Mention
Authors
Sam Bourgault
University of California, Santa Barbara, Santa Barbara, California, United States
Pilar Wiley
Pilar Wiley Studio, Los Angeles, California, United States
Avi Farber
Avi Farber Studio, Taos, New Mexico, United States
Jennifer Jacobs
University of California, Santa Barbara, Santa Barbara, California, United States
Paper URL

https://doi.org/10.1145/3544548.3580745

Video
Feellustrator: A Design Tool for Ultrasound Mid-Air Haptics
Abstract

Ultrasound mid-air haptic technology provides a large space of design possibilities, as one can modulate the ultrasound intensity in a continuous 3D space at a high speed over time. Yet, the need for programming the patterns limits rapid ideation and testing of alternatives. We present Feellustrator, a graphical design tool for quickly creating and editing ultrasound mid-air haptics. With Feellustrator, one can create custom ultrasound patterns, layer or sequence them into complex effects, project them on the user's hand, and export them for use in external programs (e.g., Unity). To create the tool, we interviewed 13 designers who had from a few months to several years of experience with ultrasound, then derived a set of requirements for supporting ultrasound design. We demonstrate the design power of Feellustrator through example applications and an evaluation with 15 participants. Then, we outline future directions for ultrasound haptic design.

Authors
Hasti Seifi
Arizona State University, Tempe, Arizona, United States
Sean Chew
University of Copenhagen, Copenhagen, Denmark
Antony James Nascè
Ultraleap, Bristol, United Kingdom
William Edward Lowther
Ultraleap, Bristol, United Kingdom
William Frier
Ultraleap, Bristol, United Kingdom
Kasper Hornbæk
University of Copenhagen, Copenhagen, Denmark
Paper URL

https://doi.org/10.1145/3544548.3580728

Video
µGlyph: a Microgesture Notation
Abstract

In the active field of hand microgestures, microgesture descriptions are typically expressed informally and are accompanied by images, leading to ambiguities and contradictions. An important step in moving the field forward is a rigorous basis for precisely describing, comparing, and analyzing microgestures. Towards this goal, we propose µGlyph, a hybrid notation based on a vocabulary of events inspired by finger biomechanics. First, we investigate the expressiveness of µGlyph by building a database of 118 microgestures extracted from the literature. Second, we experimentally explore the usability of µGlyph. Participants correctly read and wrote µGlyph descriptions 90% of the time, as compared to 46% for conventional descriptions. Third, we present tools that promote µGlyph usage, including a visual editor with LaTeX export. We finally describe how µGlyph can guide research on designing, developing, and evaluating microgesture interaction. Results demonstrate the strong potential of µGlyph to establish a common ground for microgesture research.

Authors
Adrien Chaffangeon Caillet
Laboratoire d'Informatique de Grenoble, Grenoble, France
Alix Goguey
Université Grenoble Alpes, Grenoble, France
Laurence Nigay
Université Grenoble Alpes, Grenoble, France
Paper URL

https://doi.org/10.1145/3544548.3580693

Video
InStitches: Augmenting Sewing Patterns with Personalized Material-Efficient Practice
Abstract

There is a rapidly growing group of people learning to sew online. Without hands-on instruction, these learners are often left to discover the challenges and pitfalls of sewing through trial and error, which can be a frustrating and wasteful process. We present InStitches, a software tool that augments existing sewing patterns with targeted practice tasks to guide users through the skills needed to complete their chosen project. InStitches analyzes the difficulty of sewing instructions relative to a user's reported expertise in order to determine where practice will be helpful and then solves for a new pattern layout that incorporates additional practice steps while optimizing for efficient use of available materials. Our user evaluation indicates that InStitches can successfully identify challenging sewing tasks and augment existing sewing patterns with practice tasks that users find helpful, showing promise as a tool for helping those new to the craft.

Authors
Mackenzie Leake
MIT CSAIL, Cambridge, Massachusetts, United States
Kathryn Jin
MIT CSAIL, Cambridge, Massachusetts, United States
Abe Davis
Cornell Tech, Cornell University, New York, New York, United States
Stefanie Mueller
MIT CSAIL, Cambridge, Massachusetts, United States
Paper URL

https://doi.org/10.1145/3544548.3581499

Video
Polagons: Designing and Fabricating Polarized Light Mosaics with User-Defined Color-Changing Behaviors
Abstract

Polarized light mosaics (PLMs) are color-changing structures that alter their appearance based on the orientation of incident polarized light. While a few artists have developed techniques for crafting PLMs by hand, the underlying material properties are difficult to reason about; there exist no tools to bridge the high-level design objectives with the low-level physics knowledge needed to create PLMs. In this paper, we introduce the first system for creating Polagons: machine-made PLMs crafted from cellophane with user-defined color-changing behaviors. Our system includes an interface for designing and visualizing Polagons as well as a fabrication process based on laser cutting and welding that requires minimal assembly by the user. We define the design space for Polagons and demonstrate how formalizing the process for creating PLMs can enable new applications in fields such as education, data visualization, and fashion.

Authors
Ticha Sethapakdi
MIT CSAIL, Cambridge, Massachusetts, United States
Laura Huang
MIT CSAIL, Cambridge, Massachusetts, United States
Vivian Hsinyueh Chan
National Taiwan University, Taipei, Taiwan
Lung-Pan Cheng
National Taiwan University, Taipei, Taiwan
Fernando Fuzinatto Dall'Agnol
Federal University of Santa Catarina, Blumenau, Santa Catarina, Brazil
Mackenzie Leake
MIT CSAIL, Cambridge, Massachusetts, United States
Stefanie Mueller
MIT CSAIL, Cambridge, Massachusetts, United States
Paper URL

https://doi.org/10.1145/3544548.3580639

Video
ExpresSense: Exploring a Standalone Smartphone to Sense Engagement of Users from Facial Expressions Using Acoustic Sensing
Abstract

Facial expressions have been considered a metric reflecting a person’s engagement with a task. While the evolution of expression detection methods is consequential, the foundation remains mostly on image processing techniques that suffer from occlusion, ambient light, and privacy concerns. In this paper, we propose ExpresSense, a lightweight application for standalone smartphones that relies on near-ultrasound acoustic signals for detecting users’ facial expressions. ExpresSense has been tested on different users in lab-scaled and large-scale studies for both posed as well as natural expressions. By achieving a classification accuracy of ≈ 75% over various basic expressions, we discuss the potential of a standalone smartphone to sense expressions through acoustic sensing.

Authors
Pragma Kar
Jadavpur University, Kolkata, West Bengal, India
Shyamvanshikumar Singh
IIT Kharagpur, Kharagpur, India
Avijit Mandal
IIT Kharagpur, Kharagpur, India
Samiran Chattopadhyay
Jadavpur University, Kolkata, West Bengal, India
Sandip Chakraborty
IIT Kharagpur, Kharagpur, West Bengal, India
Paper URL

https://doi.org/10.1145/3544548.3581235

Video