Providing haptic feedback in virtual reality to make the experience more realistic has become a strong focus of research in recent years. The resulting haptic feedback systems differ greatly in their technologies, feedback possibilities, and overall realism, making it challenging to compare different systems. We propose the Haptic Fidelity Framework, which provides the means to describe, understand, and compare haptic feedback systems. The framework locates a system on the spectrum from realistic to abstract haptic feedback using the Haptic Fidelity dimension. This dimension comprises 14 criteria that describe either foundational or limiting factors. A second Versatility dimension captures the current trade-off between highly realistic but application-specific feedback and more abstract but widely applicable feedback. To validate the framework, we compared the Haptic Fidelity score to the perceived feedback realism reported in evaluations from 38 papers and found a strong correlation, suggesting the framework accurately describes the realism of haptic feedback.
https://dl.acm.org/doi/abs/10.1145/3491102.3501953
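The validation step described above, comparing framework scores against perceived-realism ratings across papers, amounts to a simple correlation analysis. The sketch below illustrates the idea with made-up scores and ratings; the paper's actual data and choice of correlation statistic may differ.

```python
# Minimal sketch of the validation analysis: correlate a Haptic Fidelity
# score with perceived-realism ratings across evaluated systems.
# All values below are hypothetical placeholders, not data from the paper.
from scipy.stats import pearsonr

fidelity_scores = [0.42, 0.55, 0.61, 0.70, 0.78, 0.83, 0.90]  # framework scores (hypothetical)
perceived_realism = [2.1, 2.8, 3.0, 3.6, 3.9, 4.2, 4.6]       # mean realism ratings (hypothetical)

r, p = pearsonr(fidelity_scores, perceived_realism)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # a strong positive r supports the framework
```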
Voice interactive devices often use keyword spotting for device activation. However, this approach suffers from misrecognition of keywords and can respond to keywords not intended to call the device (e.g., "You can ask Alexa about it."), causing accidental device activations. We propose a method that leverages prosodic features to differentiate calling from not-calling voices (F1 score: 0.869), allowing devices to respond only when actually called upon and thus avoid misactivation. As a proof of concept, we built a prototype smart speaker called Aware that lets users control device activation by speaking the keyword with specific prosody patterns. These patterns were chosen to represent people's natural calling/not-calling voices, which were uncovered in a study that collected such voices and investigated their prosodic differences. A user study comparing Aware with Amazon Echo shows that Aware activates more accurately (F1 score: 0.93 vs. 0.56) and is easy to learn and use.
https://dl.acm.org/doi/abs/10.1145/3491102.3517687
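The classification idea at the core of this paper, separating calling from not-calling voices via prosody, can be sketched as a small feature-extraction-plus-classifier pipeline. Everything below is an illustrative assumption rather than the paper's actual pipeline: the feature set (F0 and energy statistics), the SVM classifier, and the synthetic pitch-glide "utterances" standing in for recorded speech.

```python
# Sketch: classify calling vs. not-calling utterances from simple prosodic
# features (pitch and energy statistics). Feature choice and classifier are
# our assumptions; toy pitch glides stand in for real recordings.
import numpy as np
import librosa
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

SR = 16000

def prosodic_features(y, sr=SR):
    """Summary statistics of the F0 contour and frame-level energy."""
    f0, _, _ = librosa.pyin(y, fmin=80, fmax=400, sr=sr)
    f0 = f0[~np.isnan(f0)]                # keep voiced frames only
    rms = librosa.feature.rms(y=y)[0]     # frame-level energy
    return [f0.mean(), f0.std(), np.ptp(f0), rms.mean(), rms.std()]

def tone(f0_start, f0_end, dur=0.6, sr=SR):
    """Toy utterance: a sinusoidal glide from f0_start to f0_end Hz."""
    t = np.linspace(0, dur, int(sr * dur))
    freq = np.linspace(f0_start, f0_end, t.size)
    return np.sin(2 * np.pi * np.cumsum(freq) / sr).astype(np.float32)

# Toy data: "calling" voices rise in pitch, "not-calling" voices stay flat.
rng = np.random.default_rng(0)
calling = [tone(120 + rng.normal(0, 5), 260 + rng.normal(0, 10)) for _ in range(10)]
not_calling = [tone(120 + rng.normal(0, 5), 125 + rng.normal(0, 5)) for _ in range(10)]
X = np.array([prosodic_features(y) for y in calling + not_calling])
labels = np.array([1] * 10 + [0] * 10)   # 1 = calling, 0 = not calling

clf = make_pipeline(StandardScaler(), SVC()).fit(X[::2], labels[::2])  # even rows: train
print("F1:", f1_score(labels[1::2], clf.predict(X[1::2])))             # odd rows: test
```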
This paper's goal is to understand the haptic-visual congruency perception of skin-slip on the fingertips given visual cues in Virtual Reality (VR). We developed SpinOcchio ('Spin' for the spinning mechanism used, 'Occhio' for the Italian word 'eye'), a handheld haptic controller capable of rendering the thickness and slipping of a virtual object pinched between two fingers. This is achieved using a mechanism with spinning and pivoting disks that apply a tangential skin-slip movement to the fingertips. With SpinOcchio, we determined the baseline haptic discrimination threshold for skin-slip, and, using these results, we tested how haptic realism of motion and thickness is perceived with varying visual cues in VR. Surprisingly, the results show that in all cases, visual cues dominate over haptic perception. Based on these results, we suggest applications that leverage skin-slip and grip interaction, contributing further to realistic experiences in VR.
https://dl.acm.org/doi/abs/10.1145/3491102.3517724
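The "baseline haptic discrimination threshold" mentioned above is typically estimated by fitting a psychometric function to detection data. Below is a minimal sketch with invented stimulus levels and response rates; the paper's actual psychophysical procedure (e.g., a staircase method) may differ.

```python
# Sketch: estimate a discrimination threshold by fitting a cumulative
# Gaussian psychometric function to proportion-detected responses.
# Stimulus levels and response rates are illustrative, not the paper's data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

slip_delta = np.array([0.5, 1.0, 2.0, 4.0, 8.0])     # skin-slip difference (mm/s, hypothetical)
p_detect = np.array([0.08, 0.20, 0.55, 0.86, 0.98])  # proportion of trials detected

(mu, sigma), _ = curve_fit(psychometric, slip_delta, p_detect, p0=[2.0, 1.0])
threshold_75 = norm.ppf(0.75, loc=mu, scale=sigma)   # 75%-correct threshold
print(f"Discrimination threshold ≈ {threshold_75:.2f} mm/s")
```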
Multimodal Interfaces (MMIs) combining speech and spatial input have the potential to elicit minimal cognitive load. Low cognitive load increases effectiveness as well as user satisfaction and is regarded as an important aspect of intuitive use. While this potential has been extensively theorized in the research community, experiments that provide supporting observations based on functional interfaces are still scarce. In particular, there is a lack of studies comparing commonly used Unimodal Interfaces (UMIs) with theoretically superior synergistic MMI alternatives. Yet such studies are an essential prerequisite for generalizing results, developing practice-oriented guidelines, and ultimately exploiting the potential of MMIs in a broader range of applications. This work contributes a novel observation toward resolving this shortcoming for a specific combination of interaction techniques, tasks, application domain, and technology: we present a comprehensive evaluation of a synergistic speech & touch MMI and a touch-only menu-based UMI (interaction techniques) for selection and system control tasks in a digital tabletop game (application domain) on an interactive surface (technology). Cognitive load, user experience, and intuitive use are evaluated, with cognitive load assessed by means of the dual-task paradigm. Our experiment shows that the implemented MMI causes significantly less cognitive load and is perceived as significantly more usable and intuitive than the UMI. Based on our results, we derive recommendations for the interface design of digital tabletop games on interactive surfaces. Further, we argue that our results and design recommendations generalize to other application domains on interactive surfaces for selection and system control tasks.
https://dl.acm.org/doi/abs/10.1145/3491102.3502062
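The dual-task paradigm used here infers cognitive load from performance on a concurrent secondary task: the more attention the primary interface demands, the slower the secondary-task responses. A minimal sketch of such an analysis follows, with invented reaction times and a paired t-test standing in for whatever statistics the paper actually used.

```python
# Sketch of a dual-task analysis: cognitive load is inferred from
# secondary-task reaction times collected while using each interface.
# RT values are invented placeholders; the paper's measures may differ.
import numpy as np
from scipy.stats import ttest_rel

# Mean secondary-task RTs (ms) per participant, within-subjects design.
rt_mmi = np.array([512, 498, 530, 545, 501, 489, 520, 507, 515, 494])
rt_umi = np.array([601, 575, 642, 610, 588, 570, 615, 598, 622, 580])

t, p = ttest_rel(rt_mmi, rt_umi)
print(f"t = {t:.2f}, p = {p:.4f}")  # lower RTs under the MMI → lower cognitive load
```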
One can embed a vibration actuator in a physical button and augment the button's original kinesthetic response with a programmable vibration generated by the actuator. Such vibration-augmented buttons inherit the advantages of both physical and virtual buttons. This paper reports the information transmission capacity of vibration-augmented buttons, obtained by conducting a series of absolute identification experiments while increasing the number of augmented buttons. The information transmission capacity found was 2.6 bits, and vibration-augmented and physical buttons showed similar abilities in rendering easily recognizable haptic responses. In addition, we showcase a VR text entry application that utilizes vibration-augmented buttons: it provides several error messages to the user during text entry via a VR controller that includes an augmented button. We validate that the variable haptic feedback improves task performance and user experience and reduces cognitive workload for a transcription task.
https://dl.acm.org/doi/abs/10.1145/3491102.3501849
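Information transmission capacity in absolute identification experiments is conventionally estimated as the mutual information of the stimulus-response confusion matrix. The sketch below computes this estimate for a hypothetical confusion matrix (not the paper's data); a capacity of 2.6 bits corresponds to roughly 2^2.6 ≈ 6 reliably distinguishable buttons.

```python
# Sketch: estimate information transfer (IT) from an absolute-identification
# confusion matrix, the standard measure behind capacities like "2.6 bits".
# The confusion counts below are hypothetical.
import numpy as np

def information_transfer(confusion):
    """Mutual information (bits) of a stimulus x response count matrix."""
    n = confusion.sum()
    p_ij = confusion / n
    p_i = p_ij.sum(axis=1, keepdims=True)   # stimulus marginals
    p_j = p_ij.sum(axis=0, keepdims=True)   # response marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p_ij * np.log2(p_ij / (p_i * p_j))
    return np.nansum(terms)                  # 0 * log(0) terms drop out

# Hypothetical 6-button experiment: rows = presented, cols = identified.
confusion = np.array([
    [18, 2, 0, 0, 0, 0],
    [ 1, 17, 2, 0, 0, 0],
    [ 0, 2, 16, 2, 0, 0],
    [ 0, 0, 2, 16, 2, 0],
    [ 0, 0, 0, 2, 17, 1],
    [ 0, 0, 0, 0, 2, 18],
])
it = information_transfer(confusion)
print(f"IT ≈ {it:.2f} bits → ~{2**it:.1f} distinguishable buttons")
```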