A computer vision system using low-resolution image sensors can provide intelligent services (e.g., activity recognition) while withholding unnecessary visual privacy information at the hardware level. However, preserving visual privacy and enabling accurate machine recognition place conflicting demands on image resolution. Modeling the trade-off between privacy preservation and machine recognition performance can guide future privacy-preserving computer vision systems that use low-resolution image sensors. In this paper, using at-home activities of daily living (ADLs) as the scenario, we first identified the most important visual privacy features through a user survey. We then quantified and analyzed the effects of image resolution on human and machine recognition performance in activity recognition and privacy awareness tasks, and investigated how modern image super-resolution techniques influence these effects. Based on the results, we proposed a method for modeling the trade-off between privacy preservation and activity recognition on low-resolution images.
https://doi.org/10.1145/3544548.3581425
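As a purely illustrative sketch (not the paper's method or data), the trade-off analysis described above could be approximated by block-averaging frames to emulate a low-resolution sensor and scoring an arbitrary recognizer at each downsampling factor; the frames and classifier below are placeholder assumptions.

```python
import numpy as np

def downsample(frame: np.ndarray, factor: int) -> np.ndarray:
    """Emulate a low-resolution sensor by block-averaging factor x factor pixel groups."""
    h, w, c = frame.shape
    return frame.reshape(h // factor, factor, w // factor, factor, c).mean(axis=(1, 3))

def tradeoff_curve(frames, labels, classify, factors=(2, 4, 8, 16, 32)):
    """Return (downsampling factor, recognition accuracy) pairs for any frame -> label model."""
    curve = []
    for f in factors:
        preds = [classify(downsample(x, f)) for x in frames]
        accuracy = float(np.mean([p == y for p, y in zip(preds, labels)]))
        curve.append((f, accuracy))
    return curve

if __name__ == "__main__":
    # Toy stand-ins: random 224x224 frames and a trivial classifier, only to show the flow.
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 256, (224, 224, 3)).astype(np.float32) for _ in range(4)]
    labels = [0, 1, 0, 1]
    dummy_recognizer = lambda frame: int(frame.mean() > 127.0)  # placeholder ADL recognizer
    print(tradeoff_curve(frames, labels, dummy_recognizer))
```

The same loop could be rerun on super-resolved versions of the downsampled frames to probe how super-resolution shifts the curve.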
Workplaces are increasingly adopting emotion AI, promising benefits to organizations. However, little is known about the perceptions and experiences of workers subject to emotion AI in the workplace. Our interview study with US adult workers (n=15) addresses this gap, finding that (1) participants viewed emotion AI as a deep privacy violation of workers’ sensitive emotional information; (2) emotion AI may function to enforce workers’ compliance with emotional labor expectations, and workers may engage in emotional labor as a mechanism to preserve privacy over their emotions; and (3) workers may be exposed to a wide range of harms as a consequence of emotion AI in the workplace. The findings reveal the need to recognize and define an individual right to what we introduce as emotional privacy, and raise important research and policy questions on how to protect and preserve emotional privacy within and beyond the workplace.
https://doi.org/10.1145/3544548.3580950
Increased use of technology in schools raises new privacy and security challenges for K-12 students---and harms such as commercialization of student data, exposure of student data in security breaches, and expanded tracking of students---but the extent of these challenges is unclear. In this paper, we first interviewed 18 school officials and IT personnel to understand what educational technologies districts use and how they manage student privacy and security around these technologies. Second, to determine whether these educational technologies are frequently endorsed across United States (US) public schools, we compiled a list of linked educational technology websites scraped from 15,573 K-12 public school/district domains and analyzed them for privacy risks. Our findings suggest that administrators lack the resources to properly assess privacy and security issues around educational technologies, even though these technologies pose potential privacy risks. Based on these findings, we make recommendations for policymakers, educators, and the CHI research community.
https://doi.org/10.1145/3544548.3580777
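For illustration only, and not the paper's crawler: a minimal sketch of how third-party links could be collected from a school district homepage as candidate edtech endorsements; the domain below is a placeholder.

```python
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

def external_hosts(domain: str) -> set:
    """Fetch a homepage and return the set of third-party hosts it links to."""
    response = requests.get(f"https://{domain}", timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")
    hosts = set()
    for anchor in soup.find_all("a", href=True):
        host = urlparse(anchor["href"]).netloc
        if host and not host.endswith(domain):
            hosts.add(host)
    return hosts

if __name__ == "__main__":
    # Placeholder domain; a study at scale would iterate over thousands of district domains.
    print(sorted(external_hosts("example-district.k12.us")))
```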
People regularly rely on social support from family, friends, and the public when mitigating security and privacy risks, even though mainstream technologies hardly support these interactions. In this paper, we evaluated Meerkat, a mobile application that allows users to receive support through screenshot capturing, marking, and messaging. In a field experiment (n = 65), we tested how Meerkat helps users face phishing attempts, comparing help received from close social connections with help from community volunteers. Our findings show that while users could learn from both types of helpers, they were significantly more willing to rely on advice from close connections. We evaluated several criteria for successful support interactions, showing that learning is significantly correlated with specific properties of the support interaction, such as the length of the messages. We conclude the paper by discussing how our findings can be used to design community-based applications.
https://doi.org/10.1145/3544548.3581183
Challenge is a core element of digital games. The wide spectrum of physical, cognitive, and emotional challenge experiences provided by modern digital games can be evaluated subjectively using a questionnaire, the CORGIS, which allows for a post hoc evaluation of the overall experience that occurred during game play. Measuring this experience dynamically and objectively, however, would allow for a more holistic view of the moment-to-moment experiences of players. This study therefore explored the potential of detecting perceived challenge from physiological signals. To this end, we collected physiological responses from 32 players who engaged in three typical game scenarios. Using players' perceived challenge ratings and extracted physiological features, we applied multiple machine learning methods and metrics to detect challenge experiences. Results show that most methods achieved a detection accuracy of around 80%. We discuss in-game challenge perception, challenge-related physiological indicators, and AI-supported challenge detection to inform future work on challenge evaluation.
https://doi.org/10.1145/3544548.3581232
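As an illustrative sketch only (not the study's pipeline): detecting high versus low perceived challenge from windowed physiological features could look like the following, where the signal, sampling rate, features, and labels are all synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(signal: np.ndarray, fs: int = 64, win_s: int = 10) -> np.ndarray:
    """Split a 1-D physiological signal into fixed windows and compute simple statistics."""
    win = fs * win_s
    n = len(signal) // win
    windows = signal[: n * win].reshape(n, win)
    return np.column_stack([windows.mean(axis=1), windows.std(axis=1), np.ptp(windows, axis=1)])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    signal = rng.normal(size=64 * 10 * 60)   # synthetic stand-in for, e.g., electrodermal activity
    X = window_features(signal)              # 60 ten-second windows, 3 features each
    y = rng.integers(0, 2, size=len(X))      # synthetic binary challenge ratings
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print("Cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```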
This paper investigates how trust towards service providers and the adoption of privacy controls serving two specific purposes (control over “sharing” vs. “usage” of data) vary with users’ technical literacy. To that end, we chose Google as the context and conducted an online survey with 209 Google users. Our results suggest that integrity and benevolence perceptions toward Google are significantly lower among technical participants than among non-technical participants. While trust perceptions differ between non-technical adopters and non-adopters of privacy controls, no such difference is found among their technical counterparts. Notably, among non-technical participants, the direction in which trust affects privacy control adoption reverses depending on the purpose of the controls. Using qualitative analysis, we extract trust-enhancing and trust-dampening factors that contribute to users’ trusting beliefs about Google’s protection of user privacy. The implications of our findings for the design and promotion of privacy controls are discussed in the paper.
https://doi.org/10.1145/3544548.3581387