This study session has ended. Thank you for your participation.
Haptic feedback is essential for lifelike Virtual Reality (VR) experiences. To provide a wide range of matching sensations of being touched or stroked, current approaches typically require large numbers of different physical textures. However, even advanced devices can only accommodate a limited number of textures and remain wearable. Therefore, a better understanding of how expectations elicited by different visualizations affect haptic perception is necessary to balance these physical constraints against a wide variety of matching physical textures.
In this work, we conducted an experiment (N=31) assessing how visualizations affect the perception of roughness in VR. We designed a prototype for arm stroking and compared the effects of different visualizations on the perception of physical textures with distinct levels of roughness. Additionally, we used the visualizations' real-world materials, no haptics, and vibrotactile feedback as baselines. Among other results, we found that two levels of roughness can be sufficient to convey a realistic illusion.
A significant drawback of text passwords for end-user authentication is password reuse. We propose a novel approach to detect password reuse by leveraging gaze as well as typing behavior, and we study its accuracy. We collected gaze and typing behavior from 49 users while they created accounts for 1) a webmail client and 2) a news website. While most participants came up with a new password, 32% reported having reused an old password when setting up their accounts. We then compared different machine learning models to detect password reuse from the collected data. Our models achieve an accuracy of up to 87.7% in detecting password reuse from gaze, 75.8% from typing, and 88.75% when considering both types of behavior. We demonstrate that, using gaze, password reuse can already be detected during the registration process, before users enter their passwords. Our work paves the road for developing novel interventions to prevent password reuse.
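To make the modeling step concrete, the following is a minimal sketch of such a comparison: a binary password-reuse classifier trained on per-session gaze and typing features, evaluated with cross-validation. The feature layout, the synthetic data, and the choice of a random forest are assumptions for illustration, not the authors' actual pipeline.

```python
# Hypothetical sketch: detecting password reuse from gaze + typing features.
# Each row is one account-registration session; labels mark reported reuse.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Assumed feature matrix (placeholder data): columns 0-4 are gaze features
# (e.g., fixation count, mean fixation duration), columns 5-9 are typing
# features (e.g., mean inter-key interval, backspace count).
X = rng.normal(size=(49, 10))
y = rng.integers(0, 2, size=49)  # 1 = reused an old password, 0 = new password

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"Mean cross-validated accuracy: {scores.mean():.3f}")
```

Restricting the columns of `X` to the gaze block or the typing block would reproduce the paper's single-modality comparison under the same evaluation setup.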
What do pedestrian crossings, ATMs, elevators, and ticket machines have in common? These are just a few of the ubiquitous yet essential elements of public-space infrastructure that rely on physical buttons or touchscreens; common interactions that, until recently, were considered perfectly safe to perform. This work investigates how we might integrate touchless technologies into public-space infrastructure in order to minimise physical interaction with shared devices in light of the ongoing COVID-19 pandemic. Drawing on an ethnographic exploration of how public utilities are being used, adapted, or avoided, we developed and evaluated a suite of technology probes that can be either retrofitted into, or replace, these services. In-situ community deployments of our probes demonstrate strong uptake and provide insight into how hands-free technologies can be adapted and utilised for the public domain and, in turn, used to inform the future of walk-up-and-use public technologies.
Advanced technologies are increasingly enabling the creation of interactive devices with non-rectangular form factors, but it is currently unclear what alternative form factors are desirable for end users. We contribute an understanding of the interplay between the rationale for the form factors of such devices and their interactive content through think-aloud design sessions in which participants could mold devices as they wished using clay. We analysed their qualitative reflections on how the shapes affected interaction. Using thematic analysis, we identified shape features desirable on handheld freeform devices and discuss three themes central to the choice of form factors: freeform dexterity, shape feature discoverability, and shape adaptability (to the task and context). In a second study following the same experimental set-up, we focused on the trade-off between dexterity and discoverability and its relation to the concept of affordance. Our work reveals the shape features that most strongly influence the choice of grasps on freeform devices, from which we derive design guidelines for such devices.
Knowledge of users' affective states can improve their interaction with smartphones by providing more personalized experiences (e.g., search results and news articles). We present an affective state classification model based on data gathered on smartphones in real-world environments. From touch events during keystrokes and the signals from the inertial sensors, we extracted two-dimensional heat maps as input to a convolutional neural network that predicts the affective states of smartphone users. For evaluation, we conducted an in-the-wild data collection with 82 participants over 10 weeks. Our model accurately predicts three levels (low, medium, high) of valence (AUC up to 0.83), arousal (AUC up to 0.85), and dominance (AUC up to 0.84). We also show that using the inertial sensor data alone, our model achieves similar performance (AUC up to 0.83), making our approach less privacy-invasive. By personalizing our model to individual users, we show that performance increases by an additional 0.07 AUC.
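As a rough illustration of the described architecture, below is a minimal sketch of a convolutional network that maps two-channel 2D heat maps (one touch channel, one inertial channel) to logits for three affect levels. The input resolution, channel layout, and layer sizes are assumptions for illustration, not the authors' model.

```python
# Hypothetical sketch: a small CNN classifying 2-channel heat maps into
# three affect levels (low / medium / high) for one dimension, e.g. valence.
import torch
import torch.nn as nn

class AffectCNN(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),  # fixed 4x4 output regardless of input size
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                   # (B, 32, 4, 4)
        return self.classifier(x.flatten(1))   # logits for low/medium/high

# Example: a batch of 8 heat maps, 2 channels (touch, inertial), 32x32 bins.
logits = AffectCNN()(torch.randn(8, 2, 32, 32))
print(logits.shape)  # torch.Size([8, 3])
```

Dropping the touch channel (a single-channel input) would correspond to the inertial-only variant reported in the abstract, and per-user fine-tuning of such a network is one plausible reading of the personalization step.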