73. Remote Presentations: Highlight on Input and Control Techniques

Ultrasonic Mid-Air Haptics on the Face: Effects of Lateral Modulation Frequency and Amplitude on Users’ Responses
Description

Ultrasonic mid-air haptics (UMH) has emerged as a promising technology for facial haptic applications, offering contactless and high-resolution feedback. Despite this promise, previous studies have not thoroughly investigated individuals’ responses to UMH on the face. To bridge this gap, this study compares UMH feedback at various facial sites using the lateral modulation (LM) method, which allows us to explore the impact of two LM parameters, frequency and amplitude, on both perceptual (intensity) and emotional (valence and arousal) responses. In a study with 24 participants, we observed positive relationships between LM amplitude and both perceived intensity and arousal, while the effect of LM frequency varied across facial sites. These findings not only contribute to design guidelines and potential applications for UMH on the face, but also provide insights aimed at enhancing the effectiveness and overall user experience of haptic interactions across diverse facial sites.
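To make the two LM parameters concrete, the sketch below generates a sinusoidal focal-point trajectory for a given modulation frequency and amplitude. This is a minimal illustration of the lateral modulation principle under stated assumptions, not the authors' stimulus code; the function name, update rate, and example values are hypothetical.

```python
import numpy as np

def lateral_modulation_trajectory(center, direction, amplitude_m, frequency_hz,
                                  duration_s, update_rate_hz=1000):
    """Sample focal-point positions for sinusoidal lateral modulation.

    The focal point oscillates along `direction` around `center` with the
    given amplitude (metres) and modulation frequency (Hz)."""
    t = np.arange(0.0, duration_s, 1.0 / update_rate_hz)
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)        # unit vector along the modulation axis
    offsets = amplitude_m * np.sin(2.0 * np.pi * frequency_hz * t)
    return np.asarray(center, dtype=float) + offsets[:, None] * direction[None, :]

# Illustrative values only: a 5 mm, 20 Hz lateral modulation, 20 cm above the array, for 0.5 s.
points = lateral_modulation_trajectory(center=[0.0, 0.0, 0.20],
                                       direction=[1.0, 0.0, 0.0],
                                       amplitude_m=0.005, frequency_hz=20,
                                       duration_s=0.5)
```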

Model-based Evaluation of Recall-based Interaction Techniques
Description

This article tackles two challenges of the empirical evaluation of interaction techniques that rely on user memory, such as hotkeys, here coined Recall-based interaction techniques (RBITs): (1) the lack of guidance to design the associated study protocols, and (2) the difficulty of comparing evaluations performed with different protocols.

To address these challenges, we propose a model-based evaluation of RBITs. This approach relies on a computational model of human memory to (1) predict the informativeness of a particular protocol through the variance of the estimated parameters (Fisher Information), and (2) compare RBITs’ recall performance based on the inferred parameters rather than on behavioral statistics, which has the advantage of being independent of the study protocol. We also release a Python library implementing our approach to help researchers produce more robust and meaningful comparisons of RBITs.
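As a rough illustration of how Fisher Information can quantify a protocol's informativeness, the sketch below uses a toy one-parameter exponential-forgetting model and sums the per-test information a sequence of recall tests carries about its decay parameter. It is not the released library's API; the model, function names, and delay values are assumptions for illustration only.

```python
import numpy as np

def recall_probability(decay, delays):
    """Toy one-parameter forgetting model: P(recall) = exp(-decay * delay)."""
    return np.exp(-decay * np.asarray(delays, dtype=float))

def protocol_fisher_information(decay, delays):
    """Fisher information about `decay` carried by a protocol, i.e. a
    sequence of recall tests at the given delays (seconds).

    Each test is a Bernoulli observation, so its contribution is
    (dp/d_decay)^2 / (p * (1 - p)); a larger total means a tighter
    Cramer-Rao bound, hence lower variance on the estimated parameter."""
    delays = np.asarray(delays, dtype=float)
    p = recall_probability(decay, delays)
    dp = -delays * p                      # derivative of p with respect to decay
    return np.sum(dp ** 2 / (p * (1.0 - p)))

# Compare two candidate study protocols for a plausible decay rate (per second).
print(protocol_fisher_information(0.01, delays=[30, 60, 120, 240]))  # spaced tests
print(protocol_fisher_information(0.01, delays=[10, 10, 10, 10]))    # massed tests
```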

Behavioral Differences between Tap and Swipe: Observations on Time, Error, Touch-point Distribution, and Trajectory for Tap-and-swipe Enabled Targets
Description

Existing guidelines for designing targets on smartphones often focus on single-tap operations for accurate selection. However, smartphone interfaces can support both tap and swipe actions. We explored user-performance differences between tap and swipe in two crowdsourced experiments using bar and square targets. Results indicated longer operation times, higher error rates, and significantly shifted touch points for swipe compared to tap. Our findings imply that current target-size guidelines may not apply to swipe-operated targets, and they reveal new research opportunities for swipeable-target designs.

Impact of Fingernails Length on Mobile Tactile Interaction
Description

Mobile users have fingernails of different lengths. This paper measures the impact of fingernail length on the use of touchscreen mobile phones. We first conducted interviews with participants wearing long fingernails, who reported difficulties and unsatisfactory coping strategies for holding their phone securely and acquiring targets accurately. We then conducted three experiments comparing different fingernail lengths (0 mm, 5 mm, and 10 mm). Our results quantify the drop in comfort and efficiency: we measured the range of incidental pitch angle on the surface, the comfortable and useful area of the thumb, and target acquisition efficiency. 10 mm fingernails consistently decrease the range of the finger pitch angle by 57%, the comfortable area of the thumb by 36%, and the throughput when acquiring targets by 24%. This paper contributes guidelines for future inclusive devices and techniques that also support users with long fingernails.

Controlling the Rooms: How People Prefer Using Gestures to Control Their Smart Homes
Description

Gesture interactions have become ubiquitous, and with increasingly reliable sensing technology we can anticipate their use in everyday environments such as smart homes. Gestures must meet users' needs and constraints in diverse scenarios to gain widespread acceptance. Although mid-air gestures have been proposed in various user contexts, it is still unclear to what extent users want to integrate them into different scenarios in their smart homes, along with the motivations driving this desire. Furthermore, it is uncertain whether users will remain consistent in their suggestions when transitioning to alternative scenarios within a smart home.

This study contributes methodologically by adapting a bottom-up, frame-based design process. We offer insights into preferred devices and commands in different smart home scenarios. Our results can inform the design of smart home gestures that are consistent with individual needs across devices and scenarios, while maximizing the reuse and transferability of gestural knowledge.

Grip-Reach-Touch-Repeat: A Refined Model of Grasp to Encompass One-Handed Interaction with Arbitrary Form Factor Devices
Description

We extend grasp models to encompass one-handed interaction with arbitrarily shaped touchscreen devices. Current models focus on how objects are stably held by external forces. With touchscreen devices, however, we postulate that users make a trade-off between holding the device securely and exploring it interactively. To verify this, we first conducted a qualitative study that asked participants to grasp 3D-printed objects while considering different levels of interactivity. The results confirm our hypothesis and reveal clear changes in posture. To further verify this trade-off and to design interactions, we developed simulation software capable of computing the stability of a grasp and its reachability. We then conducted a second study, based on the observed predominant grasps, to validate our software with a glove. The results again confirm a consistent trade-off between stability and reachability. We conclude by discussing how this research can help design computational tools focused on hand-held interactions with arbitrarily shaped touchscreen devices.

Take a Seat, Make a Gesture: Charting User Preferences for On-Chair and From-Chair Gesture Input
Description

We explore the chair as a referential frame for facilitating hand gesture input to control interactive systems. First, we conduct a Systematic Literature Review on the topic of interactions supported by chairs, and uncover little research on harnessing everyday chairs for input, limited to chair rotation and tilting movements. Subsequently, to understand end users' preferences for gestures performed on the chair's surface (i.e., on-chair gestures) and in the space around the chair (i.e., from-chair gestures), we conduct an elicitation study involving 54 participants, 3 widespread chair variations (armchair, office chair, and stool), and 15 referents encompassing common actions, digital content types, and navigation commands for interactive systems. Our findings reveal a preference for unimanual gestures implemented with strokes, hand poses, and touch input, with specific nuances and kinematic profiles according to the chair type. Based on our findings, we propose a range of implications for interactive systems leveraging on-chair and from-chair gestures.

Simulating Interaction Movements via Model Predictive Control
Description

We present a Model Predictive Control (MPC) framework to simulate movement in interaction with computers, focusing on mid-air pointing as an example. Starting from an Optimal Feedback Control (OFC) perspective on interaction, we assume that users aim to minimize an internalized cost function, subject to the constraints imposed by the human body and the interactive system. Unlike previous approaches used in HCI, MPC can compute optimal controls for nonlinear systems. This allows us to use state-of-the-art biomechanical models and to handle nonlinearities that occur in almost any interactive system. Instead of torque actuation, our model employs second-order muscles acting directly at the joints. We compare three different cost functions and evaluate the simulation against user movements in a pointing study. Our results show that a combination of distance, control, and joint-acceleration costs matches individual users’ movements best, and predicts movements with an accuracy that is within the between-user variance. To aid HCI researchers and designers in applying our approach to different users, interaction techniques, or tasks, we make our SimMPC framework publicly available, including CFAT, a tool to identify maximum voluntary torques in joint-actuated models, and give step-by-step instructions.
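As a rough sketch of the cost structure described above, the snippet below combines distance-to-target, control-effort, and joint-acceleration terms into a single per-step cost, which an MPC planner would sum over its prediction horizon. It illustrates the cost family named in the abstract, not the SimMPC implementation; the function name and weights are assumed placeholders.

```python
import numpy as np

def pointing_cost(end_effector_pos, target_pos, control, joint_acc,
                  w_dist=1.0, w_ctrl=1e-2, w_jacc=1e-4):
    """Per-step cost combining distance-to-target, control effort, and
    joint acceleration; an MPC optimizer would minimize the sum of this
    cost over its planning horizon. Weights are illustrative, not fitted."""
    dist_cost = w_dist * np.sum((np.asarray(end_effector_pos) - np.asarray(target_pos)) ** 2)
    ctrl_cost = w_ctrl * np.sum(np.asarray(control) ** 2)
    jacc_cost = w_jacc * np.sum(np.asarray(joint_acc) ** 2)
    return dist_cost + ctrl_cost + jacc_cost

# Example with dummy values: fingertip 10 cm from the target, small muscle controls.
print(pointing_cost(end_effector_pos=[0.4, 0.1, 0.3], target_pos=[0.5, 0.1, 0.3],
                    control=[0.2, -0.1, 0.05], joint_acc=[1.5, -0.8, 0.3]))
```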

Exploring Experience Gaps Between Active and Passive Users During Multi-user Locomotion in VR
Description

Multi-user locomotion in VR has grown increasingly common, posing numerous challenges. A key factor contributing to these challenges is the gap in experience between active and passive users during co-locomotion. Yet there remains a limited understanding of how, and to what extent, these experiential gaps manifest in diverse multi-user co-locomotion scenarios. This paper systematically explores the gaps in physiological and psychological experience indicators between active and passive users across various locomotion situations, including when active users walk, fly by joystick, or teleport while passive users stand still or look around. We also assess the impact of factors such as sub-locomotion type, speed/teleport interval, and motion sickness susceptibility. Accordingly, we delineate acceptability disparities between active and passive users, offering insights into leveraging notable experimental findings to mitigate discomfort during co-locomotion through avoidance or intervention.
