Vibrosight++: City-Scale Sensing Using Existing Retroreflective Signs and Markers
Description

Today's smart cities use thousands of physical sensors distributed across the urban landscape to support decision making in areas such as infrastructure monitoring, public health, and resource management. These weather-hardened devices require power and connectivity, and often cost thousands of dollars just to install, let alone maintain. In this paper, we show how long-range laser vibrometry can be used for low-cost, city-scale sensing. Although laser vibrometry is typically limited to a sensing range of just a few meters, retroreflective markers can boost this range to 1 km or more. Fortuitously, cities already make extensive use of retroreflective materials for street signs, construction barriers, road studs, license plates, and many other markings. We describe how our prototype system can co-opt these existing markers at very long ranges and use them as unpowered accelerometers for use in a wide variety of sensing applications.
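
The core signal chain is easy to illustrate. Below is a minimal sketch (our illustration, not the authors' pipeline) of recovering a dominant vibration frequency from reflected-laser intensity samples; the 8 kHz sample rate and the synthetic 120 Hz marker vibration are assumptions.

```python
# Minimal sketch (not the authors' implementation): recovering a dominant
# vibration frequency from reflected-laser intensity samples, assuming a
# photodiode digitized at a fixed sample rate.
import numpy as np

def dominant_vibration_hz(intensity: np.ndarray, fs: float) -> float:
    """Return the strongest vibration frequency in an intensity trace."""
    x = intensity - intensity.mean()          # remove DC (ambient light, laser power)
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return float(freqs[np.argmax(spectrum[1:]) + 1])  # skip the DC bin

# Synthetic example: a marker vibrating at 120 Hz, sampled at 8 kHz.
fs = 8000.0
t = np.arange(0, 1.0, 1.0 / fs)
trace = 1.0 + 0.05 * np.sin(2 * np.pi * 120 * t) + 0.01 * np.random.randn(len(t))
print(dominant_vibration_hz(trace, fs))  # ~120.0
```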

DistanciAR: Authoring Site-Specific Augmented Reality Experiences for Remote Environments
Description

Most augmented reality (AR) authoring tools support authoring only in the author's current environment, but designers often need to create site-specific experiences for a different environment. We propose DistanciAR, a novel tablet-based workflow for remote AR authoring. Our baseline solution involves three steps: a remote environment is captured with a LiDAR-equipped camera; the author then creates an AR experience for it from a different location using AR interactions; finally, a viewer consumes the AR content on site. A formative study revealed understanding and navigating the remote space as key challenges with this solution. We improved the authoring interface by adding two novel modes: Dollhouse, which renders a bird's-eye view, and Peek, which creates photorealistic composite images using captured images. A second study compared this improved system with the baseline, and participants reported that the new modes made it easier to understand and navigate the remote scene.

HandPainter – 3D Sketching in VR with Hand-based Physical Proxy
Description

3D sketching in virtual reality (VR) enables users to create 3D virtual objects intuitively and immersively. However, previous studies showed that mid-air drawing may lead to inaccurate sketches. To address this issue, we propose to use one hand as a canvas proxy and the index finger of the other hand as a 3D pen. To this end, we first perform a formative study to compare two-handed interaction with tablet-pen interaction for VR sketching. Based on the findings of this study, we design HandPainter, a VR sketching system which focuses on the direct use of two hands for 3D sketching without requiring any tablet, pen, or VR controller. Our implementation is based on a pair of VR gloves, which provide hand tracking and gesture capture. We devise a set of intuitive gestures to control the various functionalities required during 3D sketching, such as canvas panning and drawing positioning. We show the effectiveness of HandPainter by presenting a number of sketching results and discussing the outcomes of a user study comparing it with mid-air drawing and tablet-based sketching tools.
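
The hand-as-canvas idea can be sketched as simple plane geometry. The toy example below is our assumption of one plausible formulation, not the paper's implementation: it projects the tracked index fingertip onto the palm's tangent plane to obtain a stroke point and a pen "depth".

```python
# Illustrative sketch only (the plane model and threshold idea are assumptions):
# treat the tracked palm as a drawing plane and project the other hand's index
# fingertip onto it to get a stroke point.
import numpy as np

def project_onto_palm(fingertip, palm_center, palm_normal):
    """Project a 3D fingertip position onto the palm's tangent plane."""
    n = palm_normal / np.linalg.norm(palm_normal)
    offset = np.dot(fingertip - palm_center, n)    # signed distance to the plane
    return fingertip - offset * n, offset          # point on plane, pen "depth"

fingertip = np.array([0.02, 0.15, 0.41])
palm_center = np.array([0.00, 0.12, 0.40])
palm_normal = np.array([0.0, 0.0, 1.0])
point, depth = project_onto_palm(fingertip, palm_center, palm_normal)
# A stroke sample could be committed only while |depth| is within a contact
# threshold, mimicking pen pressure against the physical hand proxy.
print(point, depth)
```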

Augmenting Scientific Papers with Just-in-Time, Position-Sensitive Definitions of Terms and Symbols
Description

Despite the central importance of research papers to scientific progress, they can be difficult to read. Comprehension is often stymied when the information needed to understand a passage resides somewhere else—in another section, or in another paper. In this work, we envision how interfaces can bring definitions of technical terms and symbols to readers when and where they need them most. We introduce ScholarPhi, an augmented reading interface with four novel features: (1) tooltips that surface position-sensitive definitions from elsewhere in a paper, (2) a filter over the paper that “declutters” it to reveal how the term or symbol is used across the paper, (3) automatic equation diagrams that expose multiple definitions in parallel, and (4) an automatically generated glossary of important terms and symbols. A usability study showed that the tool helps researchers of all experience levels read papers. Furthermore, researchers were eager to have ScholarPhi’s definitions available to support their everyday reading.
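
Position-sensitive lookup can be illustrated in a few lines. This hypothetical sketch (not ScholarPhi's actual implementation) returns, for a given symbol, the definition most recently introduced before the reader's current position in the paper.

```python
# Hypothetical sketch of "position-sensitive" definition lookup (our own
# simplification, not ScholarPhi's code).
from bisect import bisect_right

def definition_at(defs: list[tuple[int, str]], position: int) -> str | None:
    """defs: (character offset, definition text) pairs, sorted by offset."""
    offsets = [off for off, _ in defs]
    i = bisect_right(offsets, position)
    return defs[i - 1][1] if i else None  # nearest preceding definition, if any

defs_for_k = [(120, "k: number of clusters"), (980, "k: kernel width")]
print(definition_at(defs_for_k, 600))   # -> "k: number of clusters"
print(definition_at(defs_for_k, 1500))  # -> "k: kernel width"
```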

Figaro: A Tabletop Authoring Environment for Human-Robot Interaction
Description

Human-robot interaction designers and developers navigate a complex design space, which creates a need for tools that support intuitive design processes and harness the programming capacity of state-of-the-art authoring environments. We introduce Figaro, an expressive tabletop authoring environment for mobile robots, inspired by shadow puppetry, that provides designers with a natural, situated representation of human-robot interactions while exploiting the intuitiveness of tabletop and tangible programming interfaces. On the tabletop, Figaro projects a representation of an environment. Users demonstrate sequences of behaviors, or scenes, of an interaction by manipulating instrumented figurines that represent the robot and the human. During a scene, Figaro records the movement of figurines on the tabletop and narrations uttered by users. Subsequently, Figaro employs real-time program synthesis to assemble a complete robot program from all scenes provided. Through a user study, we demonstrate the ability of Figaro to support design exploration and development for human-robot interaction.
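
As a rough illustration of the record-then-synthesize idea, the toy data model below is entirely our assumption (Figaro's synthesis engine is far richer): it assembles demonstrated scenes into a simple sequential robot program.

```python
# A toy data model (assumed, not Figaro's actual synthesis engine) for
# recording tabletop scenes and assembling them into a robot program.
from dataclasses import dataclass, field

@dataclass
class Scene:
    trigger: str                           # narrated condition, e.g. "human enters"
    robot_path: list[tuple[float, float]]  # figurine positions sampled on the tabletop
    utterances: list[str] = field(default_factory=list)

def synthesize(scenes: list[Scene]) -> list[str]:
    """Assemble demonstrated scenes into a simple sequential robot program."""
    program = []
    for s in scenes:
        program.append(f"on {s.trigger}:")
        program.append(f"  follow path with {len(s.robot_path)} waypoints")
        for u in s.utterances:
            program.append(f"  say {u!r}")
    return program

demo = [Scene("human enters", [(0.0, 0.0), (0.5, 0.2)], ["Hello!"])]
print("\n".join(synthesize(demo)))
```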

Appliancizer: Transforming Web Pages into Electronic Devices
Description

Prototyping electronic devices that meet today's consumer standards is a time-consuming task that requires multi-domain expertise. Consumers expect electronic devices to have visually appealing designs that combine both tactile and screen-based interfaces. Appliancizer, our interactive computational design tool, exploits the similarities between graphical and tangible interfaces, allowing web pages to be rapidly transformed into physical electronic devices. Using a novel technique we call essential interface mapping, our tool converts graphical user interface elements (e.g., an HTML button) into tangible interface components (e.g., a physical button) without changing the application source code. Appliancizer automatically generates the PCB and low-level code from web-based prototypes and HTML mock-ups. This makes prototyping mixed graphical-tangible interactions as easy as modifying a web page and allows designers to leverage the well-developed ecosystem of web technologies. We demonstrate how our technique simplifies and accelerates prototyping by developing two devices with Appliancizer.
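
The mapping idea can be sketched as a lookup from GUI element types to tangible counterparts. The table and helper below are our simplified illustration; the paper's essential interface mapping technique is more general.

```python
# Minimal sketch of mapping GUI elements to tangible components (our
# illustration; component and firmware names are assumptions).
HTML_TO_HARDWARE = {
    "button":   {"component": "momentary push button", "firmware": "gpio_read"},
    "range":    {"component": "rotary potentiometer",  "firmware": "adc_read"},
    "checkbox": {"component": "toggle switch",         "firmware": "gpio_read"},
}

def map_element(tag: str, input_type: str | None = None) -> dict:
    """Pick a tangible counterpart for a GUI element, keeping the app code unchanged."""
    key = input_type if tag == "input" and input_type else tag
    if key not in HTML_TO_HARDWARE:
        raise ValueError(f"no tangible mapping for <{tag}>")
    return HTML_TO_HARDWARE[key]

print(map_element("input", "range"))  # -> rotary potentiometer + adc_read
```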

CoCapture: Effectively Communicating UI Behaviors on Existing Websites by Demonstrating and Remixing
Description

UI mockups are commonly used as shared context during interface development collaboration. In practice, UI designers often use screenshots and sketches to create mockups of desired UI behaviors for communication. However, in the later stages of UI development, interfaces can be arbitrarily complex, making them labor-intensive to sketch, and static screenshots are limited in the kinds of interactive and dynamic behaviors they can express. We introduce CoCapture, a system that lets designers easily create UI behavior mockups on existing web interfaces by demonstrating and remixing, and accurately describe their requests to helpers by referencing the resulting mockups using hypertext. In a user study, participants described UI behaviors more accurately with CoCapture than with existing sketching and communication tools, and the resulting descriptions were clear and easy to follow. Our approach can help teams develop UIs efficiently by bridging communication gaps with more accurate visual context.

AdapTutAR: An Adaptive Tutoring System for Machine Tasks Using Augmented Reality
Description

Modern manufacturing processes are in a state of flux as they adapt to increasing demand for flexible and self-configuring production. This poses challenges for training workers to rapidly master new machine operations and processes, i.e., machine tasks.

Conventional in-person training is effective but requires experts to spend time and effort on each worker trained, and it does not scale. Recorded tutorials, such as video-based or augmented reality (AR) tutorials, scale more efficiently. However, unlike in-person tutoring, existing recorded tutorials cannot adapt to workers' diverse experiences and learning behaviors. We present AdapTutAR, an adaptive task-tutoring system that enables experts to record machine task tutorials via embodied demonstration and trains learners with AR tutoring content adapted to each user's characteristics. The adaptation is achieved by continually monitoring learners' tutorial-following status and adjusting the tutoring content on the fly and in situ. Our user study demonstrated that the adaptive system is more effective and preferable than the non-adaptive one.
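
The adaptation loop can be illustrated with a deliberately simple policy. The thresholds and detail levels below are our assumptions, not AdapTutAR's model; they only show the shape of monitor-and-adjust tutoring.

```python
# A deliberately simple sketch (assumption, not AdapTutAR's model) of the
# adaptation idea: monitor how a learner follows a tutorial step and adjust
# the next step's level of detail on the fly.
def next_detail_level(current: int, errors: int, seconds_on_step: float) -> int:
    """Return tutoring detail level 0 (terse) .. 2 (fully guided)."""
    if errors > 0 or seconds_on_step > 60:
        return min(current + 1, 2)   # struggling: show richer AR guidance
    if seconds_on_step < 15:
        return max(current - 1, 0)   # fluent: fade guidance out
    return current

level = 1
for errors, duration in [(0, 10.0), (2, 75.0), (0, 12.0)]:
    level = next_detail_level(level, errors, duration)
    print(level)  # 0, then 1, then 0
```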

HapticSeer: A Multi-channel, Black-box, Platform-agnostic Approach to Detecting Video Game Events for Real-time Haptic Feedback
Description

Haptic feedback significantly enhances virtual experiences. However, supporting haptics currently requires modifying the codebase, making it impractical to add haptics to popular, high-quality experiences such as best-selling games, which are typically closed-source. We present HapticSeer, a multi-channel, black-box, platform-agnostic approach to detecting game events for real-time haptic feedback. The approach is based on two key insights: 1) all games have three types of data streams (video, audio, and controller I/O) that can be analyzed in real time to detect game events, and 2) a small number of user interface design patterns are reused across most games, so event detectors can be reused effectively. We developed an open-source HapticSeer framework and implemented several real-time event detectors for commercial PC and VR games. We validated system correctness and real-time performance, and discuss feedback from several haptics developers who used the HapticSeer framework to integrate research and commercial haptic devices.
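
One such reusable detector can be sketched in a few lines. The screen region, threshold, and event below are our assumptions, not the framework's API; the sketch only shows the pattern of watching a fixed UI region of the video stream and firing a haptic event when it changes.

```python
# Illustrative sketch of one "black-box" video-stream detector in the spirit
# of HapticSeer (region, threshold, and event names are our assumptions).
import numpy as np

HEALTH_BAR = (slice(10, 20), slice(50, 200))  # (rows, cols) of the UI element

def detect_health_drop(prev_frame: np.ndarray, frame: np.ndarray) -> bool:
    """Fire when the health-bar region loses red pixels between frames."""
    red_prev = (prev_frame[HEALTH_BAR][..., 0] > 180).sum()
    red_now = (frame[HEALTH_BAR][..., 0] > 180).sum()
    return red_now < red_prev * 0.95  # >5% of the bar disappeared

prev = np.zeros((480, 640, 3), dtype=np.uint8)
prev[HEALTH_BAR] = (255, 0, 0)            # full health bar
cur = prev.copy()
cur[10:20, 150:200] = 0                   # part of the bar drained
if detect_health_drop(prev, cur):
    print("trigger haptic pulse")         # e.g., forward to a haptics device
```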

Itsy-Bits: Fabrication and Recognition of 3D-Printed Tangibles with Small Footprints on Capacitive Touchscreens
Description

Tangibles on capacitive touchscreens are a promising approach to overcoming the limited expressiveness of touch input. While research has suggested many approaches to detecting tangibles, the corresponding tangibles are either costly or have a considerable minimum size. This makes them bulky and unattractive for many applications. At the same time, they obscure valuable display space for interaction.

To address these shortcomings, we contribute Itsy-Bits: a fabrication pipeline for 3D printing and recognition of tangibles on capacitive touchscreens with a footprint as small as a fingertip. Each Itsy-Bit consists of an enclosing 3D object and a unique conductive 2D shape on its bottom. Using only raw data of commodity capacitive touchscreens, Itsy-Bits reliably identifies and locates a variety of shapes in different sizes and estimates their orientation. Through example applications and a technical evaluation, we demonstrate the feasibility and applicability of Itsy-Bits for tangibles with small footprints.
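
As a rough illustration of footprint recognition, the toy sketch below (not the paper's recognizer) derives a rotation-invariant descriptor from a raw capacitive image and matches it against a small library of known conductive shapes.

```python
# Not the paper's recognizer -- a toy sketch of the underlying idea: match the
# raw capacitive image of a tangible's conductive footprint against a library.
import numpy as np

def descriptor(cap_image: np.ndarray, thresh: float = 0.5) -> np.ndarray:
    """Histogram of cell distances from the footprint centroid (rotation-invariant)."""
    ys, xs = np.nonzero(cap_image > thresh)
    cy, cx = ys.mean(), xs.mean()
    r = np.hypot(ys - cy, xs - cx)
    hist, _ = np.histogram(r, bins=8, range=(0, cap_image.shape[0] / 2))
    return hist / hist.sum()

def classify(cap_image: np.ndarray, library: dict[str, np.ndarray]) -> str:
    d = descriptor(cap_image)
    return min(library, key=lambda name: np.abs(library[name] - d).sum())

ring = np.zeros((15, 15))
ring[3, 3:12] = ring[11, 3:12] = ring[3:12, 3] = ring[3:12, 11] = 1
dot = np.zeros((15, 15))
dot[6:9, 6:9] = 1
lib = {"ring": descriptor(ring), "dot": descriptor(dot)}
print(classify(dot, lib))  # -> "dot"
```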

Oh, Snap! A Fabrication Pipeline to Magnetically Connect Conventional and 3D-Printed Electronics
Description

3D printing has revolutionized rapid prototyping by speeding up the creation of custom-shaped objects. With the rise of multi-material 3D printers, these custom-shaped objects can now be made interactive in a single pass through passive conductive structures. However, connecting conventional electronics to these conductive structures often still requires time-consuming manual assembly involving many wires, soldering, or gluing.

To alleviate these shortcomings, we propose Oh, Snap!: a fabrication pipeline and interfacing concept to magnetically connect a 3D-printed object equipped with passive sensing structures to conventional sensing electronics. To this end, Oh, Snap! utilizes ferromagnetic and conductive 3D-printed structures, printable in a single pass on standard printers. We further present a proof-of-concept capacitive sensing board that enables easy and robust magnetic assembly to quickly create interactive 3D-printed objects. We evaluate Oh, Snap! by assessing the robustness and quality of the connection, and demonstrate its broad applicability through a series of example applications.

WallTokens: Surface Tangibles for Vertical Displays
Description

Tangibles can enrich interaction with digital surfaces. Among other benefits, they support eyes-free control and increase awareness of other users' actions. Tangibles have been studied in combination with horizontal surfaces such as tabletops, but not with vertical screens such as wall displays. The obvious obstacle is gravity: tangibles cannot be placed on such surfaces without falling. We present WallTokens, easy-to-fabricate tangibles for interacting with a vertical surface. A WallToken is a passive token whose footprint is recognized on a tactile surface. It is equipped with a push-handle that controls a suction cup, making it easy for users to switch between sliding the token along the wall and attaching it to the wall. We describe how to build such tokens and how to recognize them on a tactile surface. We report on a study showing the benefits of WallTokens for manipulating virtual objects over multi-touch gestures. This project is a step towards enabling tangible interaction in a wall-display context.
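
Footprint recognition of this kind can be sketched simply: if each token touches the surface with three feet in a distinct triangle, its sorted pairwise distances identify it in any orientation. The token names and dimensions below are our assumptions, not the authors' design.

```python
# A minimal sketch (our simplification, not the authors' recognizer) of
# footprint-based token identification on a tactile surface.
from itertools import combinations
from math import dist

TOKENS = {  # sorted side lengths (mm) of each token's contact triangle
    "red":  (20.0, 30.0, 40.0),
    "blue": (25.0, 25.0, 35.0),
}

def identify(points: list[tuple[float, float]], tol: float = 2.0) -> str | None:
    """Match three contact points to a known token, tolerant to rotation."""
    sides = tuple(sorted(dist(a, b) for a, b in combinations(points, 2)))
    for name, ref in TOKENS.items():
        if all(abs(s - r) <= tol for s, r in zip(sides, ref)):
            return name
    return None

print(identify([(0, 0), (40, 0), (13.75, 14.52)]))  # -> "red"
```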

ArticuLev: An Integrated Self-Assembly Pipeline for Articulated Multi-Bead Levitation Primitives
Description

Acoustic levitation is gaining popularity as an approach to creating physicalized mid-air content by levitating different types of levitation primitives. Such primitives can be independent particles or particles that are physically connected via threads or pieces of cloth to form shapes in mid-air. However, initialization (i.e., placement of such primitives in their mid-air target locations) currently relies on either manual placement or specialized ad-hoc implementations, which limits their practical usage. We present ArticuLev, an integrated pipeline that deals with the identification, assembly, and mid-air placement of levitated shape primitives. We designed ArticuLev with the physical properties of commonly used levitation primitives in mind. It enables experiences that seamlessly combine different primitives into meaningful structures (including fully articulated animated shapes) and supports various levitation display approaches (e.g., particles moving at high speed). In this paper, we describe our pipeline and demonstrate it with heterogeneous combinations of levitation primitives.
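
One pipeline step, pairing detected primitives with their mid-air target locations, can be sketched as a nearest-target assignment. The greedy strategy below is our simplification; a real system would also need globally optimal assignments and collision-free paths.

```python
# A toy sketch of one pipeline step (our own simplification): assigning each
# detected bead to a mid-air target position before levitation paths are planned.
import numpy as np

def assign_targets(beads: np.ndarray, targets: np.ndarray) -> list[int]:
    """Greedily pair each bead with its nearest unclaimed target (index list)."""
    remaining = list(range(len(targets)))
    assignment = []
    for b in beads:
        j = min(remaining, key=lambda i: np.linalg.norm(targets[i] - b))
        assignment.append(j)
        remaining.remove(j)
    return assignment

beads = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
targets = np.array([[0.1, 0.0, 0.1], [0.0, 0.0, 0.1]])
print(assign_targets(beads, targets))  # -> [1, 0]
```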
