With nearly two billion users, social media Stories—an ephemeral format of sharing—are increasingly popular and projected to overtake sharing via public feeds. Sharing via Stories differs from sharing via feeds in that it removes the visible feedback (e.g., "likes" and "comments") that has come to characterize social media. Given the salience of response visibility to self-presentation and relational maintenance in the social media literature, we conducted semi-structured interviews (N = 22) to explore how people understand these processes when using Stories. We find that users have lower expectations for responses with Stories and experience less self-presentation pressure. This fosters more frequent sharing and a sense of daily connectedness, which strong ties can find valuable. Finally, the act of viewing takes on new significance as a signal of attention when it is made known to the sharer. Our findings point to the importance of effort and attention in understanding responses on social media.
Cybersecurity warnings are frequently ignored or misinterpreted even by experienced adults. While studies have examined warning design for adults, there is little data to establish recommendations for children. We conducted user studies with 22 children (ages 10-12) and 22 adults, comparing their risk perception of warning design parameters (signal colors, symbols, words) via card sorting and ranking activities followed by interviews. While our findings suggest similarities in how both groups interpret the design parameters (e.g., red, skull, and fatal convey danger), we also uncovered potential concerns with items currently used as security indicators (e.g., both groups had mixed interpretations of the open lock and police officer symbols). Individual risk perception, particularly for children, appears to depend on personal preferences and experience. Our findings suggest implications and future research directions for the design of cybersecurity warnings for children.
New consent management platforms (CMPs) have been introduced to the web to conform with the EU's General Data Protection Regulation (GDPR), particularly its requirements for consent when companies collect and process users' personal data. This work analyses how the most prevalent CMP designs affect people's consent choices. First, we scraped the designs of the five most popular CMPs on the top 10,000 websites in the UK (n=680). We found that dark patterns and implied consent are ubiquitous; only 11.8% meet our minimal requirements based on European law. Second, we conducted a field experiment with 40 participants to investigate how the eight most common designs affect consent choices. We found that notification style (banner or barrier) has no effect; removing the opt-out button from the first page increases consent by 22-23 percentage points; and providing more granular controls on the first page decreases consent by 8-20 percentage points. This study provides an empirical basis for the regulatory action necessary to enforce the GDPR, in particular the possibility of focusing on the centralised, third-party CMP services as an effective way to increase compliance.
For the past 20 years, researchers have investigated the use of eye tracking in security applications. We present a holistic view of gaze-based security applications. In particular, we canvass the literature and classify the utility of gaze in security applications into a) authentication, b) privacy protection, and c) gaze monitoring during security-critical tasks. This allows us to chart several research directions, most importantly 1) conducting field studies of implicit and explicit gaze-based authentication, enabled by recent advances in eye tracking; 2) research on gaze-based privacy protection and gaze monitoring in security-critical tasks, which are under-investigated yet very promising areas; and 3) understanding the privacy implications of pervasive eye tracking. We discuss the most promising opportunities and most pressing challenges of eye tracking for security that will shape research in gaze-based security applications for the next decade.
Computer users commonly experience interaction anomalies, such as the text cursor jumping to another location in a document, perturbed mouse pointer motion, or a disagreement between tactile input and touch screen location. These anomalies impair interaction and require the user to take corrective measures, such as resetting the text cursor or correcting the trajectory of the pointer to reach a desired target. Impairments can result from software bugs, physical hardware defects, and extraneous input. However, some designs alter the course of interaction through covert impairments: anomalies introduced intentionally and without the user's knowledge. There are various motivations for doing so, rooted in disparate fields including biometrics, electronic voting, and entertainment. We examine this kind of deception by systematizing four different ways computer interaction may become impaired and three different goals of the designer, providing insight into the design of systems that implement covert impairments.