Extended Reality (XR) technology is changing online interactions, but the granular data its sensors collect may be more privacy-invasive than web, mobile, and Internet of Things (IoT) technologies. Despite increased interest in studying developers' concerns about XR device privacy, user perceptions have rarely been addressed. We surveyed 464 XR users to assess their awareness, concerns, and coping strategies around XR data in 18 scenarios. Our findings demonstrate that many factors, such as data types and sensitivity, affect users' perceptions of privacy in XR. However, users' limited awareness of XR sensors' granular data collection capabilities, such as involuntary body signals that reveal emotional responses, restricted the range of privacy-protective strategies they used. Our results highlight a need to enhance users' awareness of data privacy threats in XR, design privacy-choice interfaces tailored to XR environments, and develop transparent XR data practices.
https://doi.org/10.1145/3613904.3642104
Personalization improves user experience by tailoring interactions to each user's background and preferences. However, personalization requires information about users that platforms often collect without their awareness or enthusiastic consent. Here, we study how the transparency of AI inferences on users' personal data affects their privacy decisions and sentiments when sharing data for personalization. We conducted two experiments in which participants (N=877) answered questions about themselves to receive personalized public-arts recommendations. After being made aware of the inferences, participants indicated their consent for the system to use the inferred data and explicitly provided data. Our results show that participants chose restrictive consent decisions for sensitive and incorrect inferences about them, and for the answers that led to such inferences. Our findings expand the existing privacy discourse to inferences and inform future directions for shaping consent mechanisms in light of increasingly pervasive AI inferences.
https://doi.org/10.1145/3613904.3642180
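The per-inference consent model this abstract describes can be made concrete with a small data-structure sketch. The Python below is purely illustrative and not the authors' implementation; the Inference fields, Consent levels, and the usable_inferences helper are assumptions chosen to mirror the abstract's notions of sensitivity, correctness, and restrictive consent.

```python
from dataclasses import dataclass
from enum import Enum


class Consent(Enum):
    ALLOW = "allow"
    RESTRICT = "restrict"   # hypothetical middle level, e.g., this session only
    DENY = "deny"


@dataclass
class Inference:
    """One AI inference surfaced to the user before personalization."""
    source_answer: str           # the raw answer the inference was derived from
    inferred_value: str          # what the system inferred (e.g., an interest)
    user_marked_correct: bool    # did the user judge the inference accurate?
    user_marked_sensitive: bool  # did the user judge it sensitive?
    consent: Consent = Consent.DENY  # default-deny until the user decides


def usable_inferences(inferences: list[Inference]) -> list[Inference]:
    """Keep only inferences the user explicitly allowed for personalization."""
    return [inf for inf in inferences if inf.consent is Consent.ALLOW]
```

Note that a real interface in this spirit would likely also propagate a denial back to the source answer, since participants in the study restricted not just inferences but the answers that led to them.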
Behavioral Biometrics in Virtual Reality (VR) enable implicit user identification by leveraging the motion data of users' heads and hands from their interactions in VR. This spatiotemporal data forms a Kinetic Signature, a user-dependent behavioral biometric trait. Although kinetic signatures have been widely used in recent research, the factors contributing to their degree of identifiability remain largely unexplored. Drawing from existing literature, this work systematically examines the influence of static and dynamic components of human motion. We conducted a two-session user study (N = 24) to re-identify users across different VR sports and exercises after one week. We found that the identifiability of a kinetic signature depends on its inherent static and dynamic factors, with the best combination yielding 90.91% identification accuracy after one week. This work thus lays a foundation for designing and refining movement-based identification protocols in immersive environments.
https://doi.org/10.1145/3613904.3642471
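The abstract does not specify the identification pipeline, but a minimal sketch conveys what a kinetic signature is: summarize head/hand motion into static components (average posture) and dynamic components (frame-to-frame movement), then match a new recording against enrolled users. The kinetic_features and identify functions below are hypothetical simplifications under these assumptions, not the study's method.

```python
import numpy as np


def kinetic_features(motion: np.ndarray) -> np.ndarray:
    """Reduce a motion recording to a fixed-length feature vector.

    motion: (T, C) array of per-frame head/hand tracking channels
    (e.g., positions and orientations). Per-channel means capture the
    static component (posture, body proportions); mean absolute
    frame-to-frame deltas capture the dynamic component (movement style).
    """
    means = motion.mean(axis=0)
    stds = motion.std(axis=0)
    deltas = np.abs(np.diff(motion, axis=0)).mean(axis=0)
    return np.concatenate([means, stds, deltas])


def identify(enrolled: dict[str, np.ndarray], probe: np.ndarray) -> str:
    """Nearest-neighbor re-identification: return the enrolled user whose
    session-one recording is closest in feature space to the new probe."""
    probe_vec = kinetic_features(probe)
    return min(
        enrolled,
        key=lambda user: float(
            np.linalg.norm(kinetic_features(enrolled[user]) - probe_vec)
        ),
    )
```

A one-week re-identification study in this framing amounts to enrolling features from session one and evaluating identify() on session-two recordings; the paper's finding is that which static and dynamic channels enter the feature vector drives the resulting accuracy.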
Data breaches are prevalent. We provide novel insights into individuals’ awareness, perception, and responses to breaches that affect them through two online surveys: a main survey (n = 413) in which we presented participants with up to three breaches that affected them, and a follow-up survey (n = 108) in which we investigated whether the main study participants followed through with their intentions to act. Overall, 73% of participants were affected by at least one breach, but participants were unaware of 74% of breaches affecting them. Although some reported an intention to take action, most participants believed the breach would not impact them. We also found a sizable intention-behavior gap. Participants did not follow through with their intentions when they were apathetic about breaches, considered potential costs, forgot, or felt resigned about taking action. Our findings suggest that breached organizations should be held accountable for more proactively informing and protecting affected consumers.
Legal frameworks rely on users making informed decisions about data collection, e.g., by accepting or declining the use of tracking technologies. In practice, however, users rarely interact with tracking consent notices deliberately on a per-website basis; instead, they habitually accept or decline optional tracking technologies altogether. We explored the potential of three nudge types (color highlighting, social cue, timer) and of default settings to interrupt this auto-response in a between-subjects experimental design with 167 participants. We did not find statistically significant differences in the buttons clicked across the nudge types. Our results showed that opt-in default settings significantly decrease acceptance rates for tracking technologies. These results are a first step toward understanding the effects of different nudging concepts on users’ interaction with tracking consent notices.