Engineering Real-World Interaction

[A] Paper Room 05, 2021-05-11 17:00:00~2021-05-11 19:00:00 / [B] Paper Room 05, 2021-05-12 01:00:00~2021-05-12 03:00:00 / [C] Paper Room 05, 2021-05-12 09:00:00~2021-05-12 11:00:00

Conference Name
CHI 2021
Vibrosight++: City-Scale Sensing Using Existing Retroreflective Signs and Markers
Abstract

Today's smart cities use thousands of physical sensors distributed across the urban landscape to support decision making in areas such as infrastructure monitoring, public health, and resource management. These weather-hardened devices require power and connectivity, and often cost thousands of dollars just to install, let alone maintain. In this paper, we show how long-range laser vibrometry can be used for low-cost, city-scale sensing. Although laser vibrometry is typically limited to a sensing range of just a few meters, retroreflective markers can boost this to 1 km or more. Fortuitously, cities already make extensive use of retroreflective materials for street signs, construction barriers, road studs, license plates, and many other markings. We describe how our prototype system can co-opt these existing markers at very long ranges and use them as unpowered accelerometers for a wide variety of sensing applications.
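
The sketch below is purely illustrative of the sensing principle, not the authors' pipeline: it assumes the vibrometer output is already available as a sampled waveform and extracts the dominant vibration frequency, the kind of feature a classifier for HVAC activity, traffic, or machinery might build on.

    import numpy as np

    def dominant_frequency(signal, sample_rate):
        """Return the strongest frequency component (Hz) of a vibration signal."""
        window = signal - np.mean(signal)                     # remove DC offset
        spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
        freqs = np.fft.rfftfreq(len(window), d=1.0 / sample_rate)
        return freqs[np.argmax(spectrum[1:]) + 1]             # skip the 0 Hz bin

    # Hypothetical reading: a 60 Hz hum picked up off a retroreflective sign
    # near running machinery, sampled at 2 kHz for one second.
    sample_rate = 2000
    t = np.arange(sample_rate) / sample_rate
    vibration = 0.5 * np.sin(2 * np.pi * 60 * t) + 0.05 * np.random.randn(len(t))
    print(f"dominant component: {dominant_frequency(vibration, sample_rate):.1f} Hz")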

Authors
Yang Zhang
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Sven Mayer
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Jesse T. Gonzalez
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Chris Harrison
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
DOI

10.1145/3411764.3445054

Paper URL

https://doi.org/10.1145/3411764.3445054

Video
DistanciAR: Authoring Site-Specific Augmented Reality Experiences for Remote Environments
Abstract

Most augmented reality (AR) authoring tools only support the author's current environment, but designers often need to create site-specific experiences for a different environment. We propose DistanciAR, a novel tablet-based workflow for remote AR authoring. Our baseline solution involves three steps. A remote environment is captured by a camera with LiDAR; then, the author creates an AR experience from a different location using AR interactions; finally, a remote viewer consumes the AR content on site. A formative study revealed understanding and navigating the remote space as key challenges with this solution. We improved the authoring interface by adding two novel modes: Dollhouse, which renders a bird's-eye view, and Peek, which creates photorealistic composite images using captured images. A second study compared this improved system with the baseline, and participants reported that the new modes made it easier to understand and navigate the remote scene.
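
The Dollhouse mode renders a bird's-eye view of the captured scene. A minimal sketch of that idea, assuming the LiDAR capture is already available as an N x 3 point cloud with z as the up axis (an assumption for illustration, not the authors' renderer), is to rasterize the points into a top-down height map:

    import numpy as np

    def top_down_heightmap(points, cell_size=0.05):
        """Rasterize an N x 3 point cloud (x, y, z; z is up) into a bird's-eye height map."""
        xy = points[:, :2]
        mins = xy.min(axis=0)
        cols, rows = np.ceil((xy.max(axis=0) - mins) / cell_size).astype(int) + 1
        heightmap = np.full((rows, cols), np.nan)
        ix = ((xy - mins) / cell_size).astype(int)
        for (cx, cy), z in zip(ix, points[:, 2]):
            # keep the highest point that falls into each cell
            if np.isnan(heightmap[cy, cx]) or z > heightmap[cy, cx]:
                heightmap[cy, cx] = z
        return heightmap

    # Hypothetical room capture: a flat floor with a box-shaped "table" in one corner.
    rng = np.random.default_rng(0)
    floor = np.column_stack([rng.uniform(0, 4, 5000), rng.uniform(0, 3, 5000), np.zeros(5000)])
    table = np.column_stack([rng.uniform(3, 4, 500), rng.uniform(2, 3, 500), np.full(500, 0.7)])
    print(top_down_heightmap(np.vstack([floor, table])).shape)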

Authors
Zeyu Wang
Yale University, New Haven, Connecticut, United States
Cuong Nguyen
Adobe Research, San Francisco, California, United States
Paul Asente
Adobe, San Jose, California, United States
Julie Dorsey
Yale University, New Haven, Connecticut, United States
DOI

10.1145/3411764.3445552

Paper URL

https://doi.org/10.1145/3411764.3445552

Video
HandPainter --- 3D Sketching in VR with Hand-based Physical Proxy
Abstract

3D sketching in virtual reality (VR) enables users to create 3D virtual objects intuitively and immersively. However, previous studies showed that mid-air drawing may lead to inaccurate sketches. To address this issue, we propose to use one hand as a canvas proxy and the index finger of the other hand as a 3D pen. To this end, we first perform a formative study to compare two-handed interaction with tablet-pen interaction for VR sketching. Based on the findings of this study, we design HandPainter, a VR sketching system which focuses on the direct use of two hands for 3D sketching without requiring any tablet, pen, or VR controller. Our implementation is based on a pair of VR gloves, which provide hand tracking and gesture capture. We devise a set of intuitive gestures to control various functionalities required during 3D sketching, such as canvas panning and drawing positioning. We show the effectiveness of HandPainter by presenting a number of sketching results and discussing the outcomes of a user study-based comparison with mid-air drawing and tablet-based sketching tools.
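
The core interaction treats one palm as the canvas and the other index fingertip as the pen. The sketch below shows how a stroke point could be derived from tracked poses; the contact threshold and coordinate conventions are assumptions for illustration, not the glove pipeline described in the paper.

    import numpy as np

    def project_onto_canvas(fingertip, palm_center, palm_normal, contact_eps=0.015):
        """Project the pen fingertip onto the palm-defined canvas plane.

        Returns (point_on_plane, is_drawing): drawing is active only while the
        fingertip is within contact_eps metres of the plane.
        """
        n = palm_normal / np.linalg.norm(palm_normal)
        signed_dist = np.dot(fingertip - palm_center, n)
        point_on_plane = fingertip - signed_dist * n
        return point_on_plane, abs(signed_dist) < contact_eps

    # Hypothetical tracked poses (metres, world coordinates).
    palm = np.array([0.10, 1.20, 0.40])
    normal = np.array([0.0, 0.0, 1.0])          # palm facing the user
    tip = np.array([0.12, 1.23, 0.41])
    point, drawing = project_onto_canvas(tip, palm, normal)
    print(point, drawing)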

Authors
Ying Jiang
The University of Hong Kong, Hong Kong, Hong Kong
Congyi Zhang
The University of Hong Kong, Hong Kong, Hong Kong
Hongbo Fu
City University of Hong Kong, Hong Kong, Hong Kong
Alberto Cannavò
Politecnico di Torino, Turin, Italy
Fabrizio Lamberti
Politecnico di Torino, Torino, Italy
Henry Y K Lau
The University of Hong Kong, Hong Kong, Hong Kong
Wenping Wang
Texas A&M University, College Station, Texas, United States
DOI

10.1145/3411764.3445302

Paper URL

https://doi.org/10.1145/3411764.3445302

Video
Augmenting Scientific Papers with Just-in-Time, Position-Sensitive Definitions of Terms and Symbols
Abstract

Despite the central importance of research papers to scientific progress, they can be difficult to read. Comprehension is often stymied when the information needed to understand a passage resides somewhere else—in another section, or in another paper. In this work, we envision how interfaces can bring definitions of technical terms and symbols to readers when and where they need them most. We introduce ScholarPhi, an augmented reading interface with four novel features: (1) tooltips that surface position-sensitive definitions from elsewhere in a paper, (2) a filter over the paper that “declutters” it to reveal how the term or symbol is used across the paper, (3) automatic equation diagrams that expose multiple definitions in parallel, and (4) an automatically generated glossary of important terms and symbols. A usability study showed that the tool helps researchers of all experience levels read papers. Furthermore, researchers were eager to have ScholarPhi’s definitions available to support their everyday reading.
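
Position-sensitive definitions mean a tooltip should show the definition that is in scope at the reader's current location, since a symbol such as k may be redefined partway through a paper. A minimal sketch of that lookup, assuming definitions have already been extracted with character offsets (the data model below is a hypothetical illustration, not ScholarPhi's):

    from bisect import bisect_right

    class DefinitionIndex:
        """Maps (symbol, reading position) to the most recent in-scope definition."""

        def __init__(self):
            self._defs = {}  # symbol -> sorted list of (offset, definition text)

        def add(self, symbol, offset, definition):
            self._defs.setdefault(symbol, []).append((offset, definition))
            self._defs[symbol].sort()

        def lookup(self, symbol, position):
            entries = self._defs.get(symbol, [])
            i = bisect_right(entries, (position, chr(0x10FFFF)))
            return entries[i - 1][1] if i else None

    # Hypothetical paper: "k" is redefined partway through.
    index = DefinitionIndex()
    index.add("k", 120, "k: number of clusters")
    index.add("k", 5400, "k: kernel width (Section 4)")
    print(index.lookup("k", 3000))   # -> "k: number of clusters"
    print(index.lookup("k", 6000))   # -> "k: kernel width (Section 4)"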

Authors
Andrew Head
UC Berkeley, Berkeley, California, United States
Kyle Lo
Allen Institute for Artificial Intelligence, Seattle, Washington, United States
Dongyeop Kang
UC Berkeley, Berkeley, California, United States
Raymond Fok
University of Washington, Seattle, Washington, United States
Sam Skjonsberg
Allen Institute for AI, Seattle, Washington, United States
Daniel Weld
University of Washington, Seattle, Washington, United States
Marti Hearst
UC Berkeley, Berkeley, California, United States
DOI

10.1145/3411764.3445648

Paper URL

https://doi.org/10.1145/3411764.3445648

Video
Figaro: A Tabletop Authoring Environment for Human-Robot Interaction
Abstract

Human-robot interaction designers and developers navigate a complex design space, which creates a need for tools that support intuitive design processes and harness the programming capacity of state-of-the-art authoring environments. We introduce Figaro, an expressive tabletop authoring environment for mobile robots, inspired by shadow puppetry, that provides designers with a natural, situated representation of human-robot interactions while exploiting the intuitiveness of tabletop and tangible programming interfaces. On the tabletop, Figaro projects a representation of an environment. Users demonstrate sequences of behaviors, or scenes, of an interaction by manipulating instrumented figurines that represent the robot and the human. During a scene, Figaro records the movement of figurines on the tabletop and narrations uttered by users. Subsequently, Figaro employs real-time program synthesis to assemble a complete robot program from all scenes provided. Through a user study, we demonstrate the ability of Figaro to support design exploration and development for human-robot interaction.
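
As a rough illustration of the kind of data a demonstrated scene might carry and how scenes could be chained into an ordered robot plan (a deliberately simplified sketch; the actual system performs real-time program synthesis over the recorded demonstrations):

    from dataclasses import dataclass, field

    @dataclass
    class Scene:
        """One demonstrated interaction scene on the tabletop."""
        name: str
        robot_path: list            # [(x, y), ...] sampled figurine positions
        human_path: list
        narration: str = ""

    @dataclass
    class InteractionProgram:
        scenes: list = field(default_factory=list)

        def add_scene(self, scene):
            self.scenes.append(scene)

        def to_plan(self):
            """Flatten the demonstrated scenes into an ordered robot plan."""
            return [(s.name, s.robot_path[-1], s.narration) for s in self.scenes]

    greet = Scene("greet", robot_path=[(0.0, 0.0), (0.5, 0.2)], human_path=[(1.0, 0.0)],
                  narration="Hello, can I help you?")
    guide = Scene("guide", robot_path=[(0.5, 0.2), (1.5, 1.0)], human_path=[(1.0, 0.0), (1.6, 1.0)],
                  narration="Follow me to the front desk.")
    program = InteractionProgram()
    program.add_scene(greet)
    program.add_scene(guide)
    print(program.to_plan())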

Authors
David J. Porfirio
University of Wisconsin–Madison, Madison, Wisconsin, United States
Laura Stegner
University of Wisconsin-Madison, Madison, Wisconsin, United States
Maya Cakmak
University of Washington, Seattle, Washington, United States
Allison Sauppe
University of Wisconsin–La Crosse, La Crosse, Wisconsin, United States
Aws Albarghouthi
University of Wisconsin-Madison, Madison, Wisconsin, United States
Bilge Mutlu
University of Wisconsin-Madison, Madison, Wisconsin, United States
DOI

10.1145/3411764.3446864

Paper URL

https://doi.org/10.1145/3411764.3446864

Video
Appliancizer: Transforming Web Pages into Electronic Devices
Abstract

Prototyping electronic devices that meet today's consumer standards is a time-consuming task that requires multi-domain expertise. Consumers expect electronic devices to have visually appealing designs with both tactile and screen-based interfaces. Appliancizer, our interactive computational design tool, exploits the similarities between graphical and tangible interfaces, allowing web pages to be rapidly transformed into physical electronic devices. Using a novel technique we call essential interface mapping, our tool converts graphical user interface elements (e.g., an HTML button) into tangible interface components (e.g., a physical button) without changing the application source code. Appliancizer automatically generates the PCB and low-level code from web-based prototypes and HTML mock-ups. This makes the prototyping of mixed graphical-tangible interactions as easy as modifying a web page and allows designers to leverage the well-developed ecosystem of web technologies. We demonstrate how our technique simplifies and accelerates prototyping by developing two devices with Appliancizer.
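
The essential-interface-mapping idea pairs each GUI element with a functionally equivalent physical component. The sketch below illustrates that pairing with a made-up component table and GPIO assignment; the selectors, parts, and pin scheme are assumptions for illustration, not the tool's actual library or code generator.

    # Hypothetical mapping from HTML interface elements to physical components.
    ESSENTIAL_MAP = {
        "button":          {"component": "tactile_switch", "pins": 1, "direction": "input"},
        "input[range]":    {"component": "rotary_potentiometer", "pins": 1, "direction": "analog_input"},
        "div.indicator":   {"component": "led", "pins": 1, "direction": "output"},
    }

    def map_elements(elements, first_gpio=2):
        """Assign each GUI element a physical component and a GPIO pin."""
        config, pin = [], first_gpio
        for selector in elements:
            part = ESSENTIAL_MAP.get(selector)
            if part is None:
                raise ValueError(f"no physical counterpart known for {selector!r}")
            config.append({"selector": selector, "component": part["component"],
                           "gpio": pin, "direction": part["direction"]})
            pin += part["pins"]
        return config

    for line in map_elements(["button", "input[range]", "div.indicator"]):
        print(line)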

Authors
Jorge Garza
University of California, San Diego, La Jolla, California, United States
Devon J. Merrill
University of California, San Diego, La Jolla, California, United States
Steven Swanson
University of California, San Diego, La Jolla, California, United States
DOI

10.1145/3411764.3445732

Paper URL

https://doi.org/10.1145/3411764.3445732

Video
CoCapture: Effectively Communicating UI Behaviors on Existing Websites by Demonstrating and Remixing
Abstract

UI mockups are commonly used as shared context during interface development collaboration. In practice, UI designers often use screenshots and sketches to create mockups of desired UI behaviors for communication. However, in the later stages of UI development, interfaces can be arbitrarily complex, making them labor-intensive to sketch, and static screenshots are limited in the types of interactive and dynamic behaviors they can express. We introduce CoCapture, a system that allows designers to easily create UI behavior mockups on existing web interfaces by demonstrating and remixing, and to accurately describe their requests to helpers by referencing the resulting mockups using hypertext. We showed that participants could more accurately describe UI behaviors with CoCapture than with existing sketch and communication tools and that the resulting descriptions were clear and easy to follow. Our approach can help teams develop UIs efficiently by bridging communication gaps with more accurate visual context.

Authors
Yan Chen
University of Michigan, Ann Arbor, Michigan, United States
Sang Won Lee
Virginia Polytechnic Institute and State University, Blacksburg, Virginia, United States
Steve Oney
University of Michigan, Ann Arbor, Michigan, United States
DOI

10.1145/3411764.3445573

Paper URL

https://doi.org/10.1145/3411764.3445573

Video
AdapTutAR: an Adaptive Tutoring System for Machine Tasks using Augmented Reality
Abstract

Modern manufacturing processes are in a state of flux, as they adapt to increasing demand for flexible and self-configuring production. This poses challenges for training workers to rapidly master new machine operations and processes, i.e. machine tasks. Conventional in-person training is effective but requires the time and effort of experts for each worker trained, and it does not scale. Recorded tutorials, such as video-based or augmented reality (AR) tutorials, permit more efficient scaling. However, unlike in-person tutoring, existing recorded tutorials lack the ability to adapt to workers' diverse experiences and learning behaviors. We present AdapTutAR, an adaptive task tutoring system that enables experts to record machine task tutorials via embodied demonstration and train learners with different AR tutoring content adapting to each user's characteristics. The adaptation is achieved by continually monitoring learners' tutorial-following status and adjusting the tutoring content on-the-fly and in-situ. The results of our user study show that our adaptive system is more effective than, and preferred over, the non-adaptive one.
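
Adaptation hinges on monitoring how well a learner follows each step and adjusting the detail of the next AR instruction accordingly. The rule below is a minimal sketch of that idea; the thresholds and the three detail levels are assumptions for illustration, not the paper's model.

    def next_detail_level(current_level, step_time_s, errors, expected_time_s=20.0):
        """Pick the detail level of the next AR instruction.

        Levels: 0 = text hint only, 1 = static overlay, 2 = full embodied demonstration.
        """
        if errors > 0 or step_time_s > 1.5 * expected_time_s:
            return min(current_level + 1, 2)      # struggling: show more detail
        if step_time_s < 0.5 * expected_time_s:
            return max(current_level - 1, 0)      # breezing through: show less
        return current_level

    print(next_detail_level(current_level=1, step_time_s=45.0, errors=1))  # -> 2
    print(next_detail_level(current_level=1, step_time_s=8.0, errors=0))   # -> 0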

Authors
Gaoping Huang
Purdue University, West Lafayette, Indiana, United States
Xun Qian
Purdue University, West Lafayette, Indiana, United States
Tianyi Wang
Purdue University, West Lafayette, Indiana, United States
Fagun Patel
Purdue University, West Lafayette, Indiana, United States
Maitreya Sreeram
Purdue University, West Lafayette, Indiana, United States
Yuanzhi Cao
Purdue University, West Lafayette, Indiana, United States
Karthik Ramani
Purdue University, West Lafayette, Indiana, United States
Alexander J. Quinn
Purdue University, West Lafayette, Indiana, United States
DOI

10.1145/3411764.3445283

Paper URL

https://doi.org/10.1145/3411764.3445283

Video
HapticSeer: A Multi-channel, Black-box, Platform-agnostic Approach to Detecting Video Game Events for Real-time Haptic Feedback
Abstract

Haptic feedback significantly enhances virtual experiences. However, supporting haptics currently requires modifying the codebase, making it impractical to add haptics to popular, high-quality experiences such as best-selling games, which are typically closed-source. We present HapticSeer, a multi-channel, black-box, platform-agnostic approach to detecting game events for real-time haptic feedback. The approach is based on two key insights: 1) all games have three types of data streams (video, audio, and controller I/O) that can be analyzed in real time to detect game events, and 2) a small number of user interface design patterns are reused across most games, so event detectors can be reused effectively. We developed an open-source HapticSeer framework and implemented several real-time event detectors for commercial PC and VR games. We validated system correctness and real-time performance, and discuss feedback from several haptics developers who used the HapticSeer framework to integrate research and commercial haptic devices.
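
As one illustration of a black-box detector over the video stream (a sketch under an assumed screen layout, not one of the framework's actual detectors), the fill level of a red health bar can be estimated from pixel colours in a fixed region and a haptic pulse fired whenever it drops:

    import numpy as np

    def health_fraction(frame, bar_box, red_threshold=150):
        """Estimate how full a red health bar is within a fixed screen region.

        frame: H x W x 3 RGB array; bar_box: (top, left, bottom, right) in pixels.
        """
        top, left, bottom, right = bar_box
        bar = frame[top:bottom, left:right]
        red = (bar[:, :, 0] > red_threshold) & (bar[:, :, 1] < 100) & (bar[:, :, 2] < 100)
        return red.mean()

    def on_new_frame(frame, state, bar_box=(20, 20, 30, 220)):
        """Report a damage event (where a real system would trigger haptics) when the bar drops."""
        level = health_fraction(frame, bar_box)
        if state.get("last") is not None and level < state["last"] - 0.05:
            print(f"damage event: health {state['last']:.2f} -> {level:.2f}, trigger haptics")
        state["last"] = level

    # Synthetic frames: a full bar, then a bar at 60%.
    state = {}
    for fill in (1.0, 0.6):
        frame = np.zeros((480, 640, 3), dtype=np.uint8)
        frame[20:30, 20:20 + int(200 * fill)] = (200, 0, 0)
        on_new_frame(frame, state)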

Award
Honorable Mention
Authors
Yu-Hsin Lin
National Taiwan University, Taipei City, Taiwan
Yu-Wei Wang
National Taiwan University, Taipei City, Taiwan
Pin-Sung Ku
National Taiwan University, Taipei City, Taiwan
Yun-Ting Cheng
National Taiwan University, Taipei City, Taiwan
Yuan-Chih Hsu
National Taiwan University, Taipei City, Taiwan
Ching-Yi Tsai
National Taiwan University, Taipei City, Taiwan
Mike Y. Chen
National Taiwan University, Taipei City, Taiwan
DOI

10.1145/3411764.3445254

Paper URL

https://doi.org/10.1145/3411764.3445254

Video
Itsy-Bits: Fabrication and Recognition of 3D-Printed Tangibles with Small Footprints on Capacitive Touchscreens
Abstract

Tangibles on capacitive touchscreens are a promising approach to overcome the limited expressiveness of touch input. While research has suggested many approaches to detect tangibles, the corresponding tangibles are either costly or have a considerable minimum size. This makes them bulky and unattractive for many applications. At the same time, they obscure valuable display space for interaction. To address these shortcomings, we contribute Itsy-Bits: a fabrication pipeline for 3D printing and recognition of tangibles on capacitive touchscreens with a footprint as small as a fingertip. Each Itsy-Bit consists of an enclosing 3D object and a unique conductive 2D shape on its bottom. Using only raw data of commodity capacitive touchscreens, Itsy-Bits reliably identifies and locates a variety of shapes in different sizes and estimates their orientation. Through example applications and a technical evaluation, we demonstrate the feasibility and applicability of Itsy-Bits for tangibles with small footprints.
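
Recognition operates on the raw capacitance image: the conductive 2D shape under each tangible produces a small blob whose outline identifies the shape and whose principal axis gives its orientation. The toy version below works on a made-up 8-bit capacitance grid and extracts only centroid, area, and orientation; it is a simplification for illustration, not the paper's recognizer.

    import numpy as np

    def footprint_features(cap_image, threshold=30):
        """Extract centroid, area, and orientation of the strongest blob in a raw capacitance image."""
        mask = cap_image > threshold
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            return None
        centroid = (xs.mean(), ys.mean())
        coords = np.column_stack([xs - centroid[0], ys - centroid[1]])
        # principal axis of the blob gives the tangible's orientation
        cov = np.cov(coords.T)
        eigvals, eigvecs = np.linalg.eigh(cov)
        major = eigvecs[:, np.argmax(eigvals)]
        angle = np.degrees(np.arctan2(major[1], major[0]))
        return {"centroid": centroid, "area": int(mask.sum()), "angle_deg": float(angle)}

    # Made-up 16 x 16 capacitance grid with an elongated footprint.
    grid = np.zeros((16, 16), dtype=np.uint8)
    grid[6:8, 3:12] = 80
    print(footprint_features(grid))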

Award
Honorable Mention
Authors
Martin Schmitz
Technical University of Darmstadt, Darmstadt, Germany
Florian Müller
TU Darmstadt, Darmstadt, Germany
Max Mühlhäuser
TU Darmstadt, Darmstadt, Germany
Jan Riemann
Technical University of Darmstadt, Darmstadt, Germany
Huy Viet Le
University of Stuttgart, Stuttgart, Germany
DOI

10.1145/3411764.3445502

Paper URL

https://doi.org/10.1145/3411764.3445502

Video
Oh, Snap! A Fabrication Pipeline to Magnetically Connect Conventional and 3D-Printed Electronics
Abstract

3D printing has revolutionized rapid prototyping by speeding up the creation of custom-shaped objects. With the rise of multi-material 3D printers, these custom-shaped objects can now be made interactive in a single pass through passive conductive structures. However, connecting conventional electronics to these conductive structures often still requires time-consuming manual assembly involving many wires, soldering, or gluing. To alleviate these shortcomings, we propose Oh, Snap!: a fabrication pipeline and interfacing concept to magnetically connect a 3D-printed object equipped with passive sensing structures to conventional sensing electronics. To this end, Oh, Snap! utilizes ferromagnetic and conductive 3D-printed structures, printable in a single pass on standard printers. We further present a proof-of-concept capacitive sensing board that enables easy and robust magnetic assembly to quickly create interactive 3D-printed objects. We evaluate Oh, Snap! by assessing the robustness and quality of the connection and demonstrate its broad applicability by a series of example applications.

Award
Best Paper
Authors
Martin Schmitz
Technical University of Darmstadt, Darmstadt, Germany
Jan Riemann
Technical University of Darmstadt, Darmstadt, Germany
Florian Müller
TU Darmstadt, Darmstadt, Germany
Steffen Kreis
TU Darmstadt, Darmstadt, Germany
Max Mühlhäuser
TU Darmstadt, Darmstadt, Germany
DOI

10.1145/3411764.3445641

Paper URL

https://doi.org/10.1145/3411764.3445641

Video
WallTokens: Surface Tangibles for Vertical Displays
Abstract

Tangibles can enrich interaction with digital surfaces. Among other benefits, they support eyes-free control or increase awareness of other users' actions. Tangibles have been studied in combination with horizontal surfaces such as tabletops, but not with vertical screens such as wall displays. The obvious obstacle is gravity: tangibles cannot be placed on such surfaces without falling. We present WallTokens, easy-to-fabricate tangibles to interact with a vertical surface. A WallToken is a passive token whose footprint is recognized on a tactile surface. It is equipped with a push-handle that controls a suction cup. This makes it easy for users to switch between sliding the token and attaching it to the wall. We describe how to build such tokens and how to recognize them on a tactile surface. We report on a study showing the benefits of WallTokens for manipulating virtual objects over multi-touch gestures. This project is a step towards enabling tangible interaction in a wall display context.
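
A WallToken is recognized from the footprint its passive contact points leave on the tactile surface. One common way to identify such passive tokens, shown below as an illustrative sketch rather than the authors' exact recognizer, is to compare the pairwise distances between contact points against a library of registered footprints:

    import numpy as np
    from itertools import combinations

    def footprint_signature(points):
        """Sorted pairwise distances between contact points (rotation/translation invariant)."""
        return np.array(sorted(np.linalg.norm(np.subtract(a, b)) for a, b in combinations(points, 2)))

    def recognize(points, library, tolerance_mm=2.0):
        """Return the name of the registered footprint closest to the observed contacts."""
        sig = footprint_signature(points)
        for name, ref in library.items():
            if len(ref) == len(sig) and np.all(np.abs(sig - ref) < tolerance_mm):
                return name
        return None

    # Hypothetical token library (contact-point layouts in millimetres).
    library = {
        "circle_token":   footprint_signature([(0, 0), (30, 0), (15, 26)]),
        "square_token":   footprint_signature([(0, 0), (40, 0), (20, 35)]),
    }
    observed = [(102, 58), (132, 58), (117, 84)]   # circle_token, translated on the wall
    print(recognize(observed, library))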

Authors
Emmanuel Courtoux
Université Paris-Saclay, CNRS, Inria, Orsay, France
Caroline Appert
Université Paris-Saclay, CNRS, Inria, Orsay, France
Olivier Chapuis
Université Paris-Saclay, CNRS, Inria, Orsay, France
DOI

10.1145/3411764.3445404

Paper URL

https://doi.org/10.1145/3411764.3445404

Video
ArticuLev: An Integrated Self-Assembly Pipeline for Articulated Multi-Bead Levitation Primitives
Abstract

Acoustic levitation is gaining popularity as an approach to create physicalized mid-air content by levitating different types of levitation primitives. Such primitives can be independent particles or particles that are physically connected via threads or pieces of cloth to form shapes in mid-air. However, initialization (i.e., placement of such primitives in their mid-air target locations) currently relies on either manual placement or specialized ad-hoc implementations, which limits their practical usage. We present ArticuLev, an integrated pipeline that deals with the identification, assembly and mid-air placement of levitated shape primitives. We designed ArticuLev with the physical properties of commonly used levitation primitives in mind. It enables experiences that seamlessly combine different primitives into meaningful structures (including fully articulated animated shapes) and supports various levitation display approaches (e.g., particles moving at high speed). In this paper, we describe our pipeline and demonstrate it with heterogeneous combinations of levitation primitives.
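
One step of such a pipeline is assigning the currently levitated particles to the target positions of a shape primitive before moving them into place. The sketch below frames this as a minimum-cost matching over particle-to-target distances using SciPy's Hungarian-algorithm solver; it illustrates the assignment step only, not the full identification and assembly pipeline described in the paper.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def assign_particles(particle_positions, target_positions):
        """Match each levitated particle to a primitive target position, minimizing total travel."""
        cost = np.linalg.norm(particle_positions[:, None, :] - target_positions[None, :, :], axis=-1)
        rows, cols = linear_sum_assignment(cost)
        return list(zip(rows, cols)), cost[rows, cols].sum()

    # Hypothetical positions (metres) inside the levitator volume.
    particles = np.array([[0.00, 0.00, 0.10], [0.05, 0.02, 0.12], [0.02, 0.06, 0.08]])
    targets   = np.array([[0.04, 0.05, 0.10], [0.00, 0.00, 0.12], [0.06, 0.01, 0.09]])
    pairs, total = assign_particles(particles, targets)
    print(pairs, round(total, 3))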

Authors
Andreas Rene Fender
ETH Zurich, Zurich, Switzerland
Diego Martinez Plasencia
University College London, London, United Kingdom
Sriram Subramanian
University College London, London, United Kingdom
DOI

10.1145/3411764.3445342

Paper URL

https://doi.org/10.1145/3411764.3445342

Video