DataLev: Mid-air Data Physicalisation Using Acoustic Levitation
Description

Data physicalisation is a technique that encodes data through the geometric and material properties of an artefact, allowing users to engage with data in a more immersive and multi-sensory way. However, current methods of data physicalisation are limited in terms of their reconfigurability and the types of materials that can be used. Acoustophoresis—a method of suspending and manipulating materials using sound waves—offers a promising solution to these challenges. In this paper, we present DataLev, a design space and platform for creating reconfigurable, multimodal data physicalisations with enriched materiality using acoustophoresis. We demonstrate the capabilities of DataLev through eight examples and evaluate its performance in terms of reconfigurability and materiality. Our work offers a new approach to data physicalisation, enabling designers to create more dynamic, engaging, and expressive artefacts.
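
For context, below is a minimal sketch of the focusing principle behind acoustophoretic levitation. It is our illustration under stated assumptions (a flat 40 kHz phased array and a single focus turned into a simple twin trap), not the DataLev implementation.

```python
import numpy as np

# Sketch only: phase delays that make a flat ultrasonic phased array focus at a
# target point. Flipping the phase of half the array turns the focus into a
# simple twin trap that can hold a lightweight bead. The array geometry and
# 40 kHz frequency are assumptions, not details taken from the paper.

SPEED_OF_SOUND = 343.0                      # m/s in air
WAVELENGTH = SPEED_OF_SOUND / 40_000.0      # 40 kHz transducers

def levitation_phases(transducer_xy, focal_point, twin_trap=True):
    """Return one drive phase (radians) per transducer."""
    positions = np.column_stack([transducer_xy, np.zeros(len(transducer_xy))])
    distances = np.linalg.norm(positions - np.asarray(focal_point), axis=1)
    phases = (-2.0 * np.pi * distances / WAVELENGTH) % (2.0 * np.pi)
    if twin_trap:
        # A pi offset on one half of the array creates a pressure null (trap)
        # at the focus instead of a pressure maximum.
        half = transducer_xy[:, 0] < np.median(transducer_xy[:, 0])
        phases[half] = (phases[half] + np.pi) % (2.0 * np.pi)
    return phases

# Example: 16 x 16 array with 10 mm pitch, trap 8 cm above the array centre.
coords = (np.arange(16) - 7.5) * 0.01
xy = np.array([(x, y) for x in coords for y in coords])
print(levitation_phases(xy, (0.0, 0.0, 0.08))[:4])
```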

In-vehicle Performance and Distraction for Midair and Touch Directional Gestures
Description

We compare the performance and level of distraction of expressive directional gesture input in the context of in-vehicle system commands. Centre console touchscreen swipes and midair swipe-like movements are tested in 8 directions, with 8-button touchscreen tapping as a baseline. Participants use these input methods for intermittent target selections while performing the Lane Change Task in a virtual driving simulator. Input performance is measured with time and accuracy, cognitive load with deviation of lane position and speed, and distraction with the frequency of off-screen glances. Results show midair gestures were less distracting and faster, but with lower accuracy. Touchscreen swipes and touchscreen tapping are comparable across measures. Our work provides empirical evidence for vehicle interface designers and manufacturers considering midair or touch directional gestures for centre console input.
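
A minimal sketch of the driving-related measures named above (deviation of lane position and speed as cognitive-load proxies, off-screen glance counts as a distraction proxy), using assumed data shapes rather than the authors' analysis code:

```python
import numpy as np

# Sketch only: per-trial lane-keeping and glance metrics. Sampling rates and
# units are assumptions for illustration.

def lane_keeping_metrics(lane_position_m, speed_mps):
    """Standard deviation of lane position (SDLP) and of speed over a trial."""
    return {
        "sdlp_m": float(np.std(lane_position_m, ddof=1)),
        "speed_sd_mps": float(np.std(speed_mps, ddof=1)),
    }

def off_screen_glance_count(gaze_on_road):
    """Count transitions from on-road (True) to off-road (False) gaze samples."""
    gaze = np.asarray(gaze_on_road, dtype=bool)
    return int(np.sum(gaze[:-1] & ~gaze[1:]))

# Example with synthetic data for a single trial.
rng = np.random.default_rng(0)
print(lane_keeping_metrics(rng.normal(0, 0.3, 600), rng.normal(27, 1.5, 600)))
print(off_screen_glance_count([True] * 50 + [False] * 10 + [True] * 40))
```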

Enabling Voice-Accompanying Hand-to-Face Gesture Recognition with Cross-Device Sensing
Description

Gestures that accompany the voice are essential for voice interaction, conveying complementary semantics for interaction purposes such as wake-up state and input modality. In this paper, we investigated voice-accompanying hand-to-face (VAHF) gestures for voice interaction. We targeted hand-to-face gestures because they relate closely to speech and yield salient acoustic features (e.g., impeding voice propagation). We conducted a user study to explore the design space of VAHF gestures: we first gathered candidate gestures and then applied a structural analysis along different dimensions (e.g., contact position and type), yielding a total of 8 VAHF gestures with good usability and minimal confusion. To support VAHF gesture recognition, we propose a novel cross-device sensing method that leverages heterogeneous channels (vocal, ultrasound, and IMU) of data from commodity devices (earbuds, watches, and rings). Our recognition model achieves an accuracy of 97.3% for recognizing 3 gestures and 91.5% for recognizing 8 gestures (excluding the "empty" gesture), demonstrating high applicability. A quantitative analysis also sheds light on the recognition capability of each sensor channel and their different combinations. Finally, we illustrate feasible use cases and their design principles to demonstrate the applicability of our system in various scenarios.
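
To make the cross-device fusion idea concrete, a minimal sketch is given below; the feature dimensions, the synthetic data, and the random-forest classifier are our assumptions, not the authors' published model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier  # stand-in classifier, not the paper's model

# Sketch only: features from the earbuds' acoustic/ultrasound channel and the
# IMU channels of the watch and ring are concatenated into one vector per
# gesture instance, then classified.

def fuse_features(acoustic_feat, watch_imu_feat, ring_imu_feat):
    """Concatenate per-channel feature vectors into a single fused vector."""
    return np.concatenate([acoustic_feat, watch_imu_feat, ring_imu_feat])

# Example: train and evaluate on synthetic data with 8 gesture classes.
rng = np.random.default_rng(1)
n, n_classes = 400, 8
X = np.stack([
    fuse_features(rng.normal(size=32), rng.normal(size=12), rng.normal(size=12))
    for _ in range(n)
])
y = rng.integers(0, n_classes, size=n)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:300], y[:300])
print("held-out accuracy on synthetic data:", clf.score(X[300:], y[300:]))
```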

Evaluating Across-Hinge Dragging with Pen and Touch on Curved and Foldable Displays
Description

Foldable touchscreens are increasingly popular, but little research has explored how the hinge impacts usability and performance. We evaluate across- and along-hinge drag gestures on a series of prototypes emulating foldable all-screen laptops with a curved hinge radius ranging from 1 mm to 24 mm. Results show that using a large 24 mm hinge radius instead of a small 1 mm hinge radius can decrease drag time by 13% and movement variability by 7% for touch input. However, hinge radius had no effect on performance for pen input. Further, we found that dragging along the hinge was up to 30% faster than dragging across the hinge, especially when dragging across at an acute angle to the hinge. Using these results, we demonstrate use cases for across- and along-hinge gestures. Our findings provide guidance for hardware and interaction designers seeking to create foldable touchscreen devices and their accompanying software.
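
A minimal sketch of how drag time and movement variability could be computed from logged pointer samples; the exact metric definitions are our assumption, not necessarily those used in the paper:

```python
import numpy as np

# Sketch only: drag time from timestamps, and movement variability as the RMS
# perpendicular deviation of the finger/pen path from the straight line between
# the drag's start and end points.

def drag_time_ms(timestamps_ms):
    return float(timestamps_ms[-1] - timestamps_ms[0])

def movement_variability(path_xy):
    """RMS perpendicular deviation of the sampled path from the start-end chord."""
    p = np.asarray(path_xy, dtype=float)
    start, end = p[0], p[-1]
    chord = end - start
    chord_len = np.linalg.norm(chord)
    if chord_len == 0:
        return 0.0
    # Perpendicular distance of each sample to the chord (2D cross product).
    rel = p - start
    perp = np.abs(rel[:, 0] * chord[1] - rel[:, 1] * chord[0]) / chord_len
    return float(np.sqrt(np.mean(perp ** 2)))

# Example: a slightly wobbly across-hinge drag sampled at 25 Hz.
path = [(0, 0), (10, 1), (20, -1), (30, 2), (40, 0)]
print(movement_variability(path), drag_time_ms(np.array([0, 40, 80, 120, 160])))
```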

Generating Real-Time, Selective, and Multimodal Haptic Effects from Sound for Gaming Experience Enhancement
Description

We propose an algorithm that generates a vibration, an impact, or a vibration+impact haptic effect by processing a sound signal in real time. Our algorithm is selective in that it matches the most appropriate type of haptic effect to the sound using a machine-learning classifier (random forest) built on expert-labeled datasets. Our algorithm is tailored to enhance user experiences in video game play, and we present two examples for the RPG (role-playing game) and FPS (first-person shooter) genres. We demonstrate the effectiveness of our algorithm through a user study comparing it with other state-of-the-art (SOTA) methods for the same crossmodal conversion. Our system elicits better multisensory user experiences than the SOTA algorithms for both game genres.
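
A minimal sketch of the selective sound-to-haptics mapping described above; the frame features, window size, and synthetic labels are assumptions for illustration, not the authors' pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Sketch only: short audio frames are summarised by a few hand-crafted features
# and a random forest picks one of the effect types.

EFFECTS = ["none", "vibration", "impact", "vibration+impact"]

def frame_features(frame, sr=48_000):
    """RMS energy, spectral centroid, and crest factor for one audio frame."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    rms = np.sqrt(np.mean(frame ** 2))
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    crest = float(np.max(np.abs(frame)) / (rms + 1e-12))
    return np.array([rms, centroid, crest])

# Example: fit on synthetic "expert-labelled" frames, then classify a new frame.
rng = np.random.default_rng(2)
frames = rng.normal(size=(200, 1024))
labels = rng.integers(0, len(EFFECTS), size=200)
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(np.array([frame_features(f) for f in frames]), labels)
print(EFFECTS[clf.predict([frame_features(rng.normal(size=1024))])[0]])
```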

Varying Subjective Speed-accuracy Biases to Evaluate the Generalizability of Experimental Findings on Pointing-facilitation Techniques
Description

In typical experiments to evaluate novel pointing-facilitation techniques, participants are asked to perform a task as rapidly and accurately as possible. However, this speed-accuracy balance can differ among participants, and a technique's measured effectiveness may change if most participants give weight to either speed or accuracy. We investigated the effects of three subjective biases (emphasizing speed, neutral, and emphasizing accuracy) on the evaluation results of two pointing-facilitation techniques, Bubble Cursor and Bayesian Touch Criterion (BTC). The results indicate that Bubble Cursor outperformed the baseline in terms of movement time and error rate under all bias conditions, while BTC underperformed a simpler target-prediction technique, an outcome inconsistent with the original study. Examining multiple biases enables researchers to discuss the (dis)advantages of novel or existing techniques more precisely, helping them reach more reliable conclusions.
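
For reference, a minimal sketch of Bubble Cursor's standard selection rule (pick the target whose boundary is closest to the cursor); this is a textbook simplification, not the authors' experimental code:

```python
import math

# Sketch only: the bubble expands to capture the target minimising the distance
# from the cursor to the target's edge (distance to centre minus radius).

def bubble_cursor_pick(cursor, targets):
    """targets: list of (x, y, radius). Returns the index of the captured target."""
    cx, cy = cursor

    def edge_distance(t):
        x, y, r = t
        return math.hypot(x - cx, y - cy) - r

    return min(range(len(targets)), key=lambda i: edge_distance(targets[i]))

# Example: the cursor is nearer to the edge of the second, larger target.
print(bubble_cursor_pick((0.0, 0.0), [(100.0, 0.0, 10.0), (60.0, 20.0, 30.0)]))
```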
