To understand the underlying process of users' information disclosure decisions, scholars often apply the privacy calculus framework or point to heuristic shortcuts. It is unclear whether the decision process varies by age. Therefore, using these common frameworks, we conducted a web-based experiment with 94 participants, who were younger (ages 19-22) or older (65+) adults, to understand how perceived app trust, sensitivity of the data, and benefits of disclosure influence users' disclosure decisions. Younger adults were more likely to change their perception of data sensitivity based on trust, while older adults were more likely to disclose information based on perceived benefits of disclosure. These results suggest older adults made more rationally calculated decisions than younger adults, who made heuristic decisions based on app trust. Our findings counter the mainstream narrative that older adults are less privacy-conscious than younger adults; instead, older adults weigh the benefits and risks of information disclosure.
As children become frequent digital technology users, concerns about their digital privacy are increasing. To better understand how young children conceptualize data processing and digital privacy risks, we interviewed 26 children, 4 to 10 years old, from families with higher educational attainment recruited in a college town. Our child participants construed apps' and services' data collection and storage practices in terms of their benefits, both to themselves and for user safety, and characterized both data tracking and privacy violations as interpersonal rather than considering automated processes or companies as privacy threats. We identify four factors shaping these mental models and privacy risk perceptions: (1) surface-level visual cues, (2) past digital interactions involving data collection, (3) age and cognitive development, and (4) privacy-related experiences in non-digital contexts. We discuss our findings' design, educational, and public policy implications toward better supporting children in identifying and reasoning about digital privacy risks.
Teachers play a key role in educating children about digital security and privacy. They are often at the forefront, witnessing incidents, dealing with the consequences, and helping children handle technology-related risks. However, little is reported about teachers' lived classroom experiences and their challenges in this regard. We conducted semi-structured interviews with 21 Canadian elementary school teachers to understand the risks teachers witness children aged 10--13 facing on digital media, teachers' mitigation strategies, and how prepared teachers are to help children. Our results show that teachers regularly help children deal with digital risks outside of teaching the official curriculum, ranging from minor privacy violations to severe cases of cyberbullying. Most issues reported by teachers were the result of typical behaviours which became risky because they took place over digital media. We use the results to highlight implications for how elementary schools address digital security and privacy.
In this work, we design and evaluate LociMotion, a training interface to learn a strong authentication secret in a single session. LociMotion automatically takes a random password with twelve lowercase letters (56-bit entropy) to generate the training interface. It first leverages users' spatial and visual (declarative) memory by showing them a video clip based on the method of loci, and then consolidates the learning process by having them play a computer game that leverages their motor (procedural) memory. The results of a memorability study with 300 participants showed that LociMotion had a significantly higher recall success rate than a control condition. A second study with 200 participants demonstrated the effectiveness of LociMotion over a period of time (99%, 96%, and 81% recall success rates after 1, 4, and 18 days, respectively). LociMotion offers an alternative to the spaced repetition technique, as it does not require dozens of training sessions.
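The entropy figure quoted above follows directly from the password structure: each of the twelve letters is drawn uniformly from a 26-character alphabet, so the password contributes 12 × log₂(26) bits. A minimal sketch confirming the arithmetic (the variable names are illustrative, not from the paper):

```python
import math

# A random password of 12 lowercase letters (a-z):
# each position contributes log2(26) bits of entropy.
password_length = 12
alphabet_size = 26

entropy_bits = password_length * math.log2(alphabet_size)
print(round(entropy_bits, 1))  # → 56.4, i.e. roughly the 56 bits cited
```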
Implicit authentication (IA) has recently become a popular approach for providing physical security on smartphones. It relies on behavioral traits (e.g., gait patterns) for user identification, instead of biometric data or knowledge of a PIN. However, it is not yet known whether users can understand the semantics of this technology well enough to use it properly. We bridge this knowledge gap by evaluating how Android's Smart Lock (SL), the first widely deployed IA solution on smartphones, is understood by its users. We conducted a qualitative user study (N=26) and an online survey (N=331). The results suggest that users often have difficulty understanding SL semantics, leaving them unable to judge when their phone would be (un)locked. We found that various aspects of SL, such as its capabilities and its authentication factors, are confusing to users. We also found that depth of smartphone adoption is a significant antecedent of SL comprehension.
Static analysis tools (SATs) have the potential to assist developers in finding and fixing vulnerabilities in the early stages of software development, provided developers can understand and act on the tools' notifications. To understand how helpful such SAT guidance is to developers, we ran an online experiment (N=132) where participants were shown four vulnerable code samples (SQL injection, hard-coded credentials, encryption, and logging sensitive data) along with SAT guidance, and asked to indicate the appropriate fix. Participants had a positive attitude towards both SAT notifications and particularly liked the example solutions and vulnerable code. Seeing SAT notifications also led to more detailed open-ended answers and slightly improved code correction answers. Still, most SAT (SpotBugs 67%, SonarQube 86%) and Control (96%) participants answered at least one code-correction question incorrectly. Prior software development experience, perceived vulnerability severity, and answer confidence all positively impacted answer accuracy.
Crypto-assets are unique in tying financial wealth to the secrecy of private keys. Prior empirical work has attempted to study end-user security from both technical and organizational perspectives. However, the link between individuals' risk perceptions and security behavior was often obscured by the heterogeneity of the subjects in small samples. This paper contributes quantitative results from a survey of 395 crypto-asset users recruited by a novel combination of deep and broad sampling. The analysis accounts for heterogeneity with a new typology that partitions the sample into three robust clusters - cypherpunks, hodlers, and rookies - using five psychometric constructs. The constructs originate from established behavioral theories with items purposefully adapted to the domain. We demonstrate the utility of this typology in better understanding users' characteristics and security behaviors. These insights inform the design of crypto-asset solutions, guide risk communication, and suggest directions for future digital currencies.
Software development teams are responsible for making and implementing software design decisions that directly impact end-user privacy, a challenging task to do well. Privacy Champions---people who strongly care about advocating privacy---play a useful role in supporting privacy-respecting development cultures. To understand their motivations, challenges, and strategies for protecting end-user privacy, we conducted 12 interviews with Privacy Champions in software development teams. We find that common barriers to implementing privacy in software design include: negative privacy culture, internal prioritisation tensions, limited tool support, unclear evaluation metrics, and technical complexity. To promote privacy, Privacy Champions regularly use informal discussions, management support, communication among stakeholders, and documentation and guidelines. They perceive code reviews and practical training as more instructive than general privacy awareness and on-boarding training. Our study is a first step towards understanding how Privacy Champions work to improve their organisation's privacy approaches and the privacy of end-user products.
Security technology often follows a systems design approach that focuses on components instead of users. As a result, the users' needs and values are not sufficiently addressed, which has implications for security usability. In this paper, we report our lessons learned from applying a user-centered security design process to a well-understood security usability challenge, namely key authentication in secure instant messaging. Users rarely perform these key authentication ceremonies, which leaves their end-to-end encrypted communication vulnerable. Our approach includes collaborative design workshops, an expert evaluation, iterative storyboard prototyping, and an online evaluation.
While we could not demonstrate that our design approach resulted in improved usability or user experience, we found that user-centered prototypes can increase the users' comprehension of security implications. Hence, prototypes based on users' intuitions, needs, and values are useful starting points for approaching long-standing security challenges. Applying complementary design approaches may improve usability and user experience further.
Judging the safety of a URL is something that even security experts struggle to do accurately without additional information. In this work, we aim to make experts' tools accessible to non-experts and assist general users in judging the safety of URLs by providing them with a usable report based on the information professionals use. We designed the report by iterating with 8 focus groups made up of end users, HCI experts, and security experts to ensure that the report was both usable and accurately interpreted. We also conducted an online evaluation with 153 participants to compare different report-length options. We find that the longer comprehensive report allows users to accurately judge URL safety (93% accurate) and that summaries still provide benefit (83% accurate) compared to domain highlighting (65% accurate).
Modern cars include a vast array of computer systems designed to remove the burden on drivers and enhance safety. As cars evolve towards autonomy and take over control, e.g., in the form of autopilots, it becomes harder for drivers to pinpoint the root causes of a car's malfunctioning. Drivers may need additional information to assess these ambiguous situations correctly. However, it is yet unclear which information is relevant and helpful to drivers in such situations.
Hence, we conducted a mixed-methods online survey (N=60) on Amazon MTurk where we exposed participants to two security- and safety-critical situations with one of three different explanations.
We applied Thematic and Correspondence Analysis to understand which factors in these situations moderate drivers’ information demand. We identified a fundamental information demand across scenarios that is expanded by error-specific information types. Moreover, we found that it is necessary to communicate error sources, since drivers might not be able to identify them correctly otherwise. Notably, malicious intrusions were typically perceived as more critical than technical malfunctions.