Clay 3D printing provides the benefits of digital fabrication automation and reconfigurability through a method that evokes manual clay coiling. Existing design technologies for clay 3D printing reflect the general 3D printing workflow in which solid forms are designed in CAD and then converted to a toolpath. In contrast, in hand-coiling, form is determined by the actions taken by the artist’s hands through space in response to the material. We theorize that an action-oriented approach for clay 3D printing could allow creators to design digital fabrication toolpaths that reflect clay material properties. We present CoilCAM, a domain-specific CAM programming system that supports the integrated generation of parametric forms and surface textures through mathematically defined toolpath operations. We developed CoilCAM in collaboration with ceramics professionals and evaluated CoilCAM’s relevance to manual ceramics by reinterpreting hand-made ceramic vessels. This process revealed the importance of iterative variation and embodied experience in action-oriented workflows.
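The "mathematically defined toolpath operations" the abstract mentions can be illustrated with a minimal sketch: a coiled vessel as stacked circles of points whose radius combines a smooth height profile with a sinusoidal surface texture. All names and parameters here are hypothetical illustrations, not CoilCAM's actual interface.

```python
import math

def coil_toolpath(layers=40, points_per_layer=120, layer_height=2.0,
                  base_radius=30.0, texture_amp=1.5, texture_freq=8):
    """Generate (x, y, z) toolpath points for a coiled vessel: stacked
    circles whose radius combines a smooth bulge profile with a sinusoidal
    surface texture. Illustrative only, not CoilCAM's actual API."""
    path = []
    for i in range(layers):
        z = i * layer_height
        # smooth profile: the vessel widens then narrows with height
        profile = base_radius + 8.0 * math.sin(math.pi * i / layers)
        for j in range(points_per_layer):
            theta = 2 * math.pi * j / points_per_layer
            # texture: radial ripple varying around the circumference
            r = profile + texture_amp * math.sin(texture_freq * theta)
            path.append((r * math.cos(theta), r * math.sin(theta), z))
    return path

points = coil_toolpath()
```

Because the form is a function of layer index and angle rather than a sliced solid, texture amplitude or frequency can be varied per layer in direct response to how the clay behaves, which is the action-oriented idea the abstract describes.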
Ultrasound mid-air haptic technology provides a large space of design possibilities, as one can modulate the ultrasound intensity in a continuous 3D space at a high speed over time. Yet, the need for programming the patterns limits rapid ideation and testing of alternatives. We present Feellustrator, a graphical design tool for quickly creating and editing ultrasound mid-air haptics. With Feellustrator, one can create custom ultrasound patterns, layer or sequence them into complex effects, project them on the user's hand, and export them for use in external programs (e.g., Unity). To create the tool, we interviewed 13 designers who had from a few months to several years of experience with ultrasound, then derived a set of requirements for supporting ultrasound design. We demonstrate the design power of Feellustrator through example applications and an evaluation with 15 participants. Then, we outline future directions for ultrasound haptic design.
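The design space the abstract describes, moving a focal point through continuous 3D space at high speed while varying its intensity, can be sketched as a pattern sampler. This is a generic spatiotemporal-modulation example with assumed names and rates, not Feellustrator's file format or API.

```python
import math

def circle_pattern(radius=0.02, center_z=0.15, draw_freq=100.0,
                   sample_rate=10_000, duration=0.05):
    """Sample a circular mid-air haptic pattern: the focal point traces a
    circle `draw_freq` times per second above the array, with a 200 Hz
    sinusoidal intensity envelope. Illustrative only, not Feellustrator's
    actual representation."""
    n = int(sample_rate * duration)
    samples = []
    for k in range(n):
        t = k / sample_rate
        phase = 2 * math.pi * draw_freq * t
        x = radius * math.cos(phase)
        y = radius * math.sin(phase)
        # amplitude modulation makes the moving point perceptible as texture
        intensity = 0.5 * (1 + math.sin(2 * math.pi * 200.0 * t))
        samples.append((x, y, center_z, intensity))
    return samples

pattern = circle_pattern()
```

A tool like the one described would let designers layer and sequence such patterns graphically instead of writing samplers like this by hand.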
In the active field of hand microgestures, microgesture descriptions are typically expressed informally and are accompanied by images, leading to ambiguities and contradictions. An important step in moving the field forward is a rigorous basis for precisely describing, comparing, and analyzing microgestures. Towards this goal, we propose µGlyph, a hybrid notation based on a vocabulary of events inspired by finger biomechanics. First, we investigate the expressiveness of µGlyph by building a database of 118 microgestures extracted from the literature. Second, we experimentally explore the usability of µGlyph. Participants correctly read and wrote µGlyph descriptions 90% of the time, as compared to 46% for conventional descriptions. Third, we present tools that promote µGlyph usage, including a visual editor with LaTeX export. We finally describe how µGlyph can guide research on designing, developing, and evaluating microgesture interaction. Results demonstrate the strong potential of µGlyph to establish a common ground for microgesture research.
There is a rapidly growing group of people learning to sew online. Without hands-on instruction, these learners are often left to discover the challenges and pitfalls of sewing through trial and error, which can be a frustrating and wasteful process. We present InStitches, a software tool that augments existing sewing patterns with targeted practice tasks to guide users through the skills needed to complete their chosen project. InStitches analyzes the difficulty of sewing instructions relative to a user's reported expertise in order to determine where practice will be helpful and then solves for a new pattern layout that incorporates additional practice steps while optimizing for efficient use of available materials. Our user evaluation indicates that InStitches can successfully identify challenging sewing tasks and augment existing sewing patterns with practice tasks that users find helpful, showing promise as a tool for helping those new to the craft.
Polarized light mosaics (PLMs) are color-changing structures that alter their appearance based on the orientation of incident polarized light. While a few artists have developed techniques for crafting PLMs by hand, the underlying material properties are difficult to reason about; there exist no tools to bridge the high-level design objectives with the low-level physics knowledge needed to create PLMs. In this paper, we introduce the first system for creating Polagons: machine-made PLMs crafted from cellophane with user-defined color changing behaviors. Our system includes an interface for designing and visualizing Polagons as well as a fabrication process based on laser cutting and welding that requires minimal assembly by the user. We define the design space for Polagons and demonstrate how formalizing the process for creating PLMs can enable new applications in fields such as education, data visualization, and fashion.
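The low-level physics the abstract alludes to can be sketched with the standard waveplate transmission formula: a birefringent sheet (such as cellophane) with its fast axis at angle θ between crossed polarizers transmits T(λ) = sin²(2θ)·sin²(δ/2), where the retardance δ = 2π·n·R/λ grows with the number of stacked sheets. The per-sheet retardance value below is an assumption; real cellophane varies by brand and thickness, and this is not Polagons' actual design tool.

```python
import math

def transmission(wavelength_nm, layers=1, axis_deg=45.0,
                 retardance_nm=280.0):
    """Transmission of `layers` stacked cellophane sheets (fast axes
    aligned) between crossed polarizers, via T = sin^2(2*theta) *
    sin^2(delta/2). retardance_nm=280 is an assumed per-sheet value."""
    theta = math.radians(axis_deg)
    delta = 2 * math.pi * layers * retardance_nm / wavelength_nm
    return math.sin(2 * theta) ** 2 * math.sin(delta / 2) ** 2

# Which visible wavelengths pass strongly determines the perceived color;
# rotating the incident polarization shifts theta and thus the hue.
spectrum = {wl: transmission(wl, layers=2) for wl in range(400, 701, 50)}
```

Stacking layers and rotating their axes reshapes this spectrum, which is why reasoning about PLM color change by hand is hard and why a design/visualization tool helps.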
Facial expressions have been considered a metric reflecting a person’s engagement with a task. While expression detection methods have evolved considerably, most still rest on image-processing techniques that suffer from occlusion, sensitivity to ambient light, and privacy concerns. In this paper, we propose ExpresSense, a lightweight application for standalone smartphones that relies on near-ultrasound acoustic signals for detecting users’ facial expressions. ExpresSense has been tested with different users in lab-scale and large-scale studies for both posed and natural expressions. By achieving a classification accuracy of ≈75% over various basic expressions, we discuss the potential of a standalone smartphone to sense expressions through acoustic sensing.