Haptic Experience (HX) is a proposed set of quality criteria for evaluating haptic technology, with prior evidence supporting a 5-factor model for vibrotactile feedback. We report on an ongoing scale-development process to measure HX and explore whether these criteria hold when applied to more diverse devices, including vibrotactile, force-feedback, surface-haptic, and mid-air-haptic systems. From an in-person user study with 430 participants, exploratory factor analysis (EFA), and confirmatory factor analysis (CFA), we extract an 11-item, 4-factor model (Realism, Harmony, Involvement, Expressivity) that only partially overlaps with the previous model. Comparing the two, we find that the new 4-factor model is more general and can guide the attributes or applications of new haptic systems. Our findings suggest that HX may vary with the modalities used in an application, but that these four factors are general constructs that may overlap with modality-specific concepts of HX. These factors can point designers to the right quality criteria when designing or evaluating haptic experiences across multiple modalities.
Deformable interfaces provide unique interaction potential for force input, for example, when users physically push into a soft display surface.
However, there remains limited understanding of which visual-tactile design elements signify the presence and stiffness of such deformable force-input components.
In this paper, we explore how people associate surface stiffness with colours, graphical shapes, and physical shapes.
We conducted a cross-modal correspondence (CC) study, where 30 participants associated different surface stiffnesses with colours and shapes.
Our findings provide evidence of CCs between stiffness levels and a subset of the 2D/3D shapes and colours used in the study. We distil our findings into three design recommendations: (1) lighter colours should be used to indicate soft surfaces, and darker colours to indicate stiff surfaces; (2) rounded shapes should be used to indicate soft surfaces, and less-curved shapes to indicate stiffer surfaces; and (3) longer 2D drop-shadows should be used to indicate softer surfaces, and shorter drop-shadows to indicate stiffer surfaces.
This paper investigates how different textile materials on wrist bands may mediate the affective experience of two forms of tactile icons: stroking and squeezing. To this end, we designed Emoband, a wrist-worn textile-based tactile display that can stroke or squeeze wearers’ wrists through moving fabrics. We conducted two studies of participants’ valence and arousal ratings of the stroking and squeezing feedback from Emoband, respectively. Results showed that valence ratings were significantly affected by the interaction between the stroking or squeezing parameters and the wristband’s material, while movement-related parameters dominated the arousal ratings. The valence ratings also showed opposite tendencies for the two types of tactile feedback: valence increased with the strength of stroking but decreased with the strength of squeezing. In addition, the valence ratings of the stroking stimuli were distributed differently across materials, indicating the materials’ varying affective expressiveness in wrist-worn haptic devices.
Gaze is promising for hands-free interaction on mobile devices. However, it is not clear how gaze interaction methods compare to each other in mobile settings. This paper presents the first experiment in a mobile setting that compares three of the most commonly used gaze interaction methods: Dwell time, Pursuits, and Gaze gestures. In our study, 24 participants selected one of 2, 4, 9, 12, and 32 targets via gaze, both while sitting and while walking. Results show that input using Pursuits is faster than Dwell time and Gaze gestures, especially when there are many targets. Users prefer Pursuits when stationary, but prefer Dwell time when walking. While selection using Gaze gestures is more demanding and slower when there are many targets, it is suitable for contexts where accuracy is more important than speed. We conclude with guidelines for the design of gaze interaction on handheld mobile devices.
We propose a novel method for seamlessly identifying users by combining thermal and visible feet features. While it is known that users’ feet have unique characteristics, these have so far been underutilized for biometric identification, as observing them often requires the removal of shoes and socks. As thermal cameras become ubiquitous, we foresee a new form of identification that uses feet features and heat traces to reconstruct the footprint even while shoes or socks are worn. We collected a dataset of users’ feet (𝑁 = 21), with three types of footwear (personal shoes, standard shoes, and socks) on three floor types (carpet, laminate, and linoleum). By combining visual and thermal features, we achieve an AUC between 91.1% and 98.9%, depending on floor and footwear type, with personal shoes on linoleum flooring performing best. Our findings demonstrate the potential of thermal imaging for continuous and unobtrusive user identification.
Perceiving the weight of objects and avatars in Virtual Reality (VR) is important for understanding their properties and interacting with them naturally. However, commercial VR controllers cannot render weight, and controllers presented in previous work are single-handed, slow, or render only a small mass. In this paper, we present PumpVR, a system that renders weight by varying the controllers’ mass according to the properties of virtual objects or bodies. Using a bi-directional pump and solenoid valves, the system changes the controllers’ absolute weight by transferring water in or out, with an average error of less than 5%. We implemented VR use cases with objects and avatars of different weights to compare the system with standard controllers. A study with 24 participants revealed significantly higher realism and enjoyment when using PumpVR to interact with virtual objects. Using the system to render body weight had significant effects on virtual embodiment, perceived exertion, and self-perceived fitness.