Novel Interactions

Conference Name
UIST 2022
Flexel: A Modular Floor Interface for Room-Scale Tactile Sensing
Abstract

Human environments are physically supported by floors, which keep people and furniture from falling under gravity. Because our body motions continuously generate vibrations and loads that propagate to the ground, measuring these expressive signals enables unobtrusive activity sensing. In this study, we present Flexel, a modular floor interface for room-scale tactile sensing. By paving a room with floor interfaces, our system can immediately begin to infer touch positions, track user locations, recognize foot gestures, and detect object locations. Through a series of exploratory studies, we identified a preferable hardware design that adheres to construction conventions, as well as an optimal sensor density that mediates the trade-off between cost and performance. In addition, we summarize design guidelines that generalize to other floor interfaces. Finally, we demonstrate example applications of room-scale tactile sensing enabled by Flexel.
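
To make the abstract's idea of inferring touch positions from distributed load measurements concrete, below is a minimal illustrative sketch (not the paper's algorithm), assuming a grid of per-cell load readings and a fixed sensor pitch; `readings` and `cell_pitch_m` are hypothetical names.

```python
# Illustrative sketch only: estimate a touch position on a sensing tile as a
# load-weighted centroid over a hypothetical grid of per-cell readings.
import numpy as np

def estimate_touch_position(readings: np.ndarray, cell_pitch_m: float = 0.1):
    """Return the (x, y) load centroid in meters, or None if the tile is idle."""
    total = readings.sum()
    if total <= 0:
        return None  # no significant load on this tile
    rows, cols = np.indices(readings.shape)
    x = (cols * readings).sum() / total * cell_pitch_m
    y = (rows * readings).sum() / total * cell_pitch_m
    return float(x), float(y)

# Example: a single foot pressing one cell of a 4x4 tile.
tile = np.zeros((4, 4))
tile[1, 2] = 30.0  # load on one cell, in arbitrary units
print(estimate_touch_position(tile))  # -> (0.2, 0.1)
```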

Authors
Takatoshi Yoshida
University of Tokyo, Tokyo, Japan
Narin Okazaki
University of Tokyo, Tokyo, Japan
Masaharu Hirose
University of Tokyo, Tokyo, Japan
Shingo Kitagawa
University of Tokyo, Tokyo, Japan
Ken Takaki
University of Tokyo, Tokyo, Japan
Masahiko Inami
University of Tokyo, Tokyo, Japan
Paper URL

https://doi.org/10.1145/3526113.3545699

ForceSight: Non-Contact Force Sensing with Laser Speckle Imaging
Abstract

Force sensing has been a key enabling technology for a wide range of interfaces, such as digitally enhanced body and world surfaces for touch interactions. Additionally, force often carries rich contextual information about user activities and can be used to enhance machine perception for improved user and environment awareness. To sense force, conventional approaches rely on contact sensors made of pressure-sensitive materials such as piezo films/discs or force-sensitive resistors. We present ForceSight, a non-contact force-sensing approach using laser speckle imaging. Our key observation is that object surfaces deform in the presence of force. This deformation, though minute, manifests as observable and discernible laser speckle shifts, which we leverage to sense the applied force. This non-contact force-sensing capability opens up new opportunities for rich interactions and can be used to power user- and environment-aware interfaces. We first built and verified a model of laser speckle shift under surface deformation. To investigate the feasibility of our approach, we conducted studies on metal, plastic, wood, and a wide variety of other materials. We also ran supplementary tests to fully characterize the performance of our approach. Finally, we demonstrated the applicability of ForceSight with several example applications.
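
The core observation, that force-induced surface deformation appears as a laser speckle shift, can be illustrated with a generic phase-correlation sketch; this is not ForceSight's actual pipeline, and the linear calibration gain `gain_n_per_px` is an invented placeholder.

```python
# Generic sketch: estimate the speckle shift between two frames with phase
# correlation, then map the shift magnitude to force through an assumed
# pre-calibrated linear gain (invented value).
import numpy as np

def speckle_shift(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Estimate the (dy, dx) pixel shift of frame_b relative to frame_a."""
    f_a, f_b = np.fft.fft2(frame_a), np.fft.fft2(frame_b)
    cross_power = np.conj(f_a) * f_b
    cross_power /= np.abs(cross_power) + 1e-12      # keep phase only
    correlation = np.fft.ifft2(cross_power).real
    peak = np.array(np.unravel_index(np.argmax(correlation), correlation.shape),
                    dtype=float)
    dims = np.array(frame_a.shape, dtype=float)
    return np.where(peak > dims / 2, peak - dims, peak)  # allow negative shifts

def force_from_shift(shift_px: np.ndarray, gain_n_per_px: float = 0.5) -> float:
    """Map speckle-shift magnitude to force via an assumed linear calibration."""
    return gain_n_per_px * float(np.linalg.norm(shift_px))

# Example: a synthetic speckle frame and a copy circularly shifted by 3 px.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
print(speckle_shift(frame, np.roll(frame, 3, axis=0)))  # -> about [3. 0.]
```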

Authors
Siyou Pei
University of California, Los Angeles, Los Angeles, California, United States
Pradyumna Chari
University of California, Los Angeles, Los Angeles, California, United States
Xue Wang
University of California, Los Angeles, Los Angeles, California, United States
Xiaoying Yang
University of California, Los Angeles, Los Angeles, California, United States
Achuta Kadambi
University of California, Los Angeles, Los Angeles, California, United States
Yang Zhang
University of California, Los Angeles, Los Angeles, California, United States
Paper URL

https://doi.org/10.1145/3526113.3545622

NFCStack: Identifiable Physical Building Blocks that Support Concurrent Construction and Frictionless Interaction
Abstract

In this paper, we propose NFCStack, a physical building block system based on near-field communication (NFC) that supports stacking and frictionless interaction. The system consists of a portable station that can support and resolve the stacking order of three types of passive, identifiable stackables: bricks, boxes, and adapters. The bricks support stable and sturdy physical construction, whereas the boxes support frictionless tangible interactions. The adapters provide an interface between these two types of stackables and convert the top of a stack into a terminal for detecting interactions between NFC-tagged objects. In contrast to existing systems based on NFC or radio-frequency identification technologies, NFCStack is portable, supports simultaneous interactions, and resolves stacking and interaction events responsively, even when objects are not strictly aligned. Evaluation results indicate that the proposed system effectively supports 12 layers of rich-ID stacking with the three types of building blocks, even if every box is stacked with a 6-mm offset. The results also indicate possible generalized applications of the proposed system, including 2.5-dimensional construction. The interaction styles are described using several educational application examples, and the design implications of this research are explained.
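
As a purely hypothetical application-side illustration (the station's hardware-level resolution of stacking order is not modeled here), the sketch below diffs successive bottom-to-top snapshots of tag IDs to emit stack and unstack events; all identifiers are invented.

```python
# Hypothetical application-side sketch: diff successive bottom-to-top stack
# snapshots (lists of NFC tag IDs, assumed unique) to report stacking events.
from typing import List

class StackTracker:
    def __init__(self) -> None:
        self.current: List[str] = []

    def update(self, snapshot: List[str]) -> None:
        """Compare a new snapshot against the previous one and report changes."""
        common = 0  # length of the untouched common prefix (lower layers)
        while (common < len(self.current) and common < len(snapshot)
               and self.current[common] == snapshot[common]):
            common += 1
        for tag in reversed(self.current[common:]):
            print(f"unstacked: {tag}")
        for layer, tag in enumerate(snapshot[common:], start=common):
            print(f"stacked: {tag} at layer {layer}")
        self.current = list(snapshot)

tracker = StackTracker()
tracker.update(["brick-A", "adapter-1"])           # brick-A, adapter-1 stacked
tracker.update(["brick-A", "adapter-1", "box-7"])  # box-7 stacked at layer 2
tracker.update(["brick-A"])                        # box-7, adapter-1 unstacked
```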

Authors
Chi-Jung Lee
National Taiwan University, Taipei, Taiwan
Chi-Huan Chiang
National Taiwan University, Taipei, Taiwan
Ling-Chien Yang
National Taiwan University, Taipei, Taiwan
Te-Yen Wu
Dartmouth College, Hanover, New Hampshire, United States
Rong-Hao Liang
Eindhoven University of Technology, Eindhoven, Netherlands
Bing-Yu Chen
National Taiwan University, Taipei, Taiwan
Paper URL

https://doi.org/10.1145/3526113.3545658

Gesture-aware Interactive Machine Teaching with In-situ Object Annotations
Abstract

Interactive Machine Teaching (IMT) systems allow non-experts to easily create Machine Learning (ML) models. However, existing vision-based IMT systems either ignore annotations on the objects of interest or require users to annotate in a post-hoc manner. Without annotations on objects, the model may misinterpret the objects using unrelated features; post-hoc annotation adds workload, which diminishes the usability of the overall model-building process. In this paper, we develop LookHere, which integrates in-situ object annotation into vision-based IMT. LookHere exploits users' deictic gestures to segment the objects of interest in real time, and this segmentation information can additionally be used for training. To achieve reliable object segmentation, we use our custom dataset, HuTics, which includes 2,040 front-facing images of deictic gestures toward various objects by 170 people. The quantitative results of our user study showed that participants created a model 16.3 times faster with our system than with a standard IMT system using a post-hoc annotation process, while the resulting models achieved comparable accuracy. Additionally, models created by our system showed a significant accuracy improvement (ΔmIoU = 0.466) in segmenting the objects of interest compared to those trained without annotations.
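
For context on the metric reported above, the sketch below computes IoU and mean IoU for binary object masks; it is the standard definition rather than the authors' evaluation code, and mIoU here is simply averaged over mask pairs.

```python
# Standard intersection-over-union for binary masks, averaged into a mean IoU.
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU of two boolean masks; both-empty counts as a perfect match."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union > 0 else 1.0

def mean_iou(preds, gts) -> float:
    """Average IoU over corresponding (prediction, ground truth) mask pairs."""
    return float(np.mean([iou(p, g) for p, g in zip(preds, gts)]))

# Example with two toy 2x2 masks: overlap of 1 cell, union of 2 cells -> 0.5.
pred = np.array([[1, 1], [0, 0]], dtype=bool)
gt = np.array([[1, 0], [0, 0]], dtype=bool)
print(iou(pred, gt))  # 0.5
```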

Authors
Zhongyi Zhou
The University of Tokyo, Tokyo, Japan
Koji Yatani
University of Tokyo, Tokyo, Japan
Paper URL

https://doi.org/10.1145/3526113.3545648

Integrating Living Organisms in Devices to Implement Care-based Interactions
Abstract

Researchers have been exploring how incorporating care-based interactions can change the user's attitude and relationship toward an interactive device. This is typically achieved through virtual care, where users care for digital entities. In this paper, we explore this concept further by investigating how physical care for a living organism, embedded as a functional component of an interactive device, also changes user-device relationships. Living organisms differ in that they require an environment conducive to life, which, in our concept, the user is responsible for providing by caring for the organism (e.g., feeding it). We instantiated our concept by engineering a smartwatch that includes a slime mold that physically conducts power to a heart-rate sensor inside the device, acting as a living wire. In this smartwatch, the availability of heart-rate sensing depends on the health of the slime mold: with the user's care, the slime mold becomes conductive and enables the sensor; without care, the slime mold dries out and disables the sensor (resuming care resuscitates it). To explore how our living device was perceived by users, we conducted a study in which participants wore our slime mold-integrated smartwatch for 9-14 days. We found that participants felt a sense of responsibility, developed a reciprocal relationship, and experienced the organism's growth as a source of affect. Finally, to allow engineers and designers to expand on our work, we abstract our findings into a set of technical and design recommendations for engineering interactive devices that incorporate this type of care-based relationship.
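
The gating behavior described above, where heart-rate sensing is available only while the living wire conducts, could be sketched as a simple threshold check; the resistance cutoff and the reader callable below are entirely hypothetical.

```python
# Hypothetical sketch: enable the heart-rate sensor only while the slime mold
# "living wire" conducts well enough. Threshold and reader are invented.
DRY_RESISTANCE_OHMS = 1e6  # assumed cutoff above which the mold is too dry

def heart_rate_available(read_wire_resistance) -> bool:
    """True when the living wire is conductive enough to power the sensor."""
    return read_wire_resistance() < DRY_RESISTANCE_OHMS

print(heart_rate_available(lambda: 2.5e4))  # True: well-cared-for, conductive mold
print(heart_rate_available(lambda: 5.0e6))  # False: the mold has dried out
```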

Authors
Jasmine Lu
University of Chicago, Chicago, Illinois, United States
Pedro Lopes
University of Chicago, Chicago, Illinois, United States
Paper URL

https://doi.org/10.1145/3526113.3545629

WaddleWalls: Room-scale Interactive Partitioning System using a Swarm of Robotic Partitions
Abstract

We propose WaddleWalls, a room-scale interactive partitioning system that uses a swarm of robotic partitions to let occupants interactively reconfigure workspace partitions to satisfy their privacy and interaction needs. The system can automatically arrange a partition layout designed by the user on demand; the user specifies each target partition's position, orientation, and height through 3D manipulations with a controller. In this work, we discuss the design considerations of the interactive partition system and implement a proof-of-concept WaddleWalls prototype assembled from off-the-shelf materials. We demonstrate the functionalities of WaddleWalls through several application scenarios in an open-plan office environment. We also conduct an initial user evaluation comparing WaddleWalls with conventional wheeled partitions, finding that WaddleWalls enables effective workspace partitioning and reduces the physical and temporal effort needed to fulfill ad hoc social and privacy requirements. Finally, we clarify the feasibility, potential, and future challenges of WaddleWalls through an interview with experts.
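
As a hypothetical sketch only (invented names and logic, not the WaddleWalls controller), the snippet below represents the user-specified target pose of each partition and greedily assigns every target to the nearest free robot.

```python
# Hypothetical sketch: user-specified partition targets and a greedy
# nearest-first assignment of robotic partitions to those targets.
from dataclasses import dataclass
import math

@dataclass
class PartitionTarget:
    x: float         # floor position in meters
    y: float
    yaw_deg: float   # panel orientation
    height_m: float  # extended panel height

def assign_partitions(robot_positions, targets):
    """Pair each target with the closest currently unassigned robot."""
    unassigned = dict(enumerate(robot_positions))
    plan = {}
    for target in targets:
        r_idx = min(unassigned,
                    key=lambda i: math.hypot(unassigned[i][0] - target.x,
                                             unassigned[i][1] - target.y))
        plan[r_idx] = target
        del unassigned[r_idx]
    return plan  # robot index -> PartitionTarget

robots = [(0.0, 0.0), (3.0, 1.0)]
layout = [PartitionTarget(2.5, 1.0, 90.0, 1.8), PartitionTarget(0.5, 0.5, 0.0, 1.2)]
print(assign_partitions(robots, layout))  # {1: ..., 0: ...}
```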

Authors
Yuki Onishi
Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
Kazuki Takashima
Tohoku University, Sendai, Japan
Shoi Higashiyama
Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
Kazuyuki Fujita
Tohoku University, Sendai, Miyagi, Japan
Yoshifumi Kitamura
Tohoku University, Sendai, Japan
Paper URL

https://doi.org/10.1145/3526113.3545615