This study session has ended. Thank you for participating.
Phishing attacks are a major threat to organizations and individuals, and current widespread defenses based on spam filters and domain blacklisting are unfortunately insufficient. Prior work identifies phishing reporting as a key, largely untapped resource for mitigating phishing threats. Yet, in practice, reporting suffers from very low rates and generally insufficient uptake from users. Although phishing reporting behavior is known to be affected by a number of 'human factors', a comprehensive view of the different theories and their effects on (intent to) report has not yet been developed. To address this gap, we evaluate theories and factors analyzed in the extant literature, build a cohesive theoretical view of their effects and constructs, and develop, model, and empirically evaluate (by means of an online questionnaire, n=284) the resulting hypothesis structure. We discuss both the theoretical implications of our findings and directions for practice at the research and organizational levels.
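To illustrate how such a hypothesis structure might be evaluated against questionnaire data, the sketch below fits a simple regression of reporting intention on candidate factors. The construct and column names are hypothetical placeholders, and ordinary least squares stands in for whatever modeling approach the paper actually uses.

```python
# Minimal sketch: test whether hypothesized constructs predict intent to report
# phishing. Column names (attitude, subjective_norm, self_efficacy,
# intent_to_report) and the input file are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# One row per questionnaire respondent (the study collected n=284 responses).
df = pd.read_csv("questionnaire_responses.csv")

# Ordinary least squares: intention to report as a function of candidate factors.
model = smf.ols(
    "intent_to_report ~ attitude + subjective_norm + self_efficacy",
    data=df,
).fit()

print(model.summary())  # coefficients, p-values, and R^2 for each hypothesized effect
```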
The use of smartphones has become an integral part of everyday life. Because smartphones are ubiquitous and serve many functions, the data handled by these devices is sensitive in nature. Despite the measures companies take to protect users' data, research has shown that people do not take the necessary actions to stay safe from security and privacy threats. Persuasive games have been deployed across various domains to motivate people towards positive behaviour change. Even though persuasive games can be effective, research has shown that a one-size-fits-all approach to their design may be less effective than tailored versions of the game. This paper presents the design and evaluation of a persuasive game to improve user awareness of smartphone security and privacy, tailored to the user's motivational orientation using Regulatory Focus Theory. The results of our mixed-methods in-the-wild study of 102 people, followed by one-on-one interviews with 25 people, show that the tailored version of the persuasive game outperformed the non-tailored version in improving users' secure smartphone behaviour. We contribute to the broader HCI community by offering design suggestions and highlighting the benefits of tailoring persuasive games.
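As a rough illustration of how tailoring by motivational orientation could work, the sketch below routes a player to a promotion- or prevention-framed game variant based on averaged questionnaire scores. The items, scale, and decision rule are illustrative assumptions, not the study's actual instrument or game logic.

```python
# Hypothetical sketch: choose a game variant from Regulatory Focus questionnaire
# responses. The scoring and tie-breaking rule are assumptions for illustration.
from statistics import mean

def select_game_variant(promotion_items: list[int], prevention_items: list[int]) -> str:
    """Return which tailored variant to serve, given 1-7 Likert responses."""
    promotion_score = mean(promotion_items)
    prevention_score = mean(prevention_items)
    if promotion_score >= prevention_score:
        # Promotion-oriented players get gain-framed security advice.
        return "promotion_framed_variant"
    # Prevention-oriented players get loss-avoidance framing.
    return "prevention_framed_variant"

print(select_game_variant([6, 5, 7], [4, 3, 5]))  # -> promotion_framed_variant
```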
In today's digitized societies, phishing attacks are a security threat with damaging consequences. Organizations remain vulnerable to phishing attacks, and it is not clear how the work context influences people's perceptions and behaviors related to phishing attempts. I investigate (1) how contextual factors influence reactions to a spear-phishing attempt, (2) why people report or do not report phishing attempts, and (3) which opportunities for security-enhancing interventions people identify. I use an in-situ deception methodology to observe participants (N=14) in their realistic work environment. I triangulate observational and self-reported data to obtain rich qualitative insights into participants' emotions, thoughts, and actions when receiving a targeted phishing email. I find that the task, IT, internal, and social contexts play an important role. Alignment of the email's request with expectations and perceived time pressure when responding to emails were associated with insecure behavior. The social context positively influenced phishing detection, but "phished" participants did not tell anyone.
Preventing workplace phishing depends on the actions of every employee, regardless of cybersecurity expertise. Based on 24 semi-structured interviews with mid-career office workers (70.8% women, averaging 44 years old) at two U.S. universities, we found that fewer than 21% of our participants had any formal anti-phishing training. Much of what our participants know about phishing comes from informal sources that emphasize "tips" and "tricks", such as those found in conversations with friends, news stories, newsletters, social media, and podcasts. These informal channels provide opportunities for IT professionals who wish to enhance employees' anti-phishing awareness by better aligning the delivery of expert advice with employees' current practices and desires. We provide four recommendations designed to embrace "guerrilla learning" by distributing anti-phishing educational resources across the workplace and workday, in part to encourage the delivery of more accurate information in more informal and incidental ways and to foster greater dialogue between anti-phishing training instructors and learners.
Humans can play a decisive role in detecting and mitigating cyber attacks if they possess sufficient cybersecurity skills and knowledge. Realizing this potential requires effective cybersecurity training. Cyber range exercises (CRXs) represent a novel form of cybersecurity training in which trainees experience realistic cyber attacks in authentic environments. Although evaluation is undeniably essential for any learning environment, it has been widely neglected in CRX research. To address this issue, we propose a taxonomy-based framework to facilitate a comprehensive and structured evaluation of CRXs. To demonstrate the applicability and potential of the framework, we instantiate it to evaluate Iceberg CRX, a training we recently developed to improve cybersecurity education at our university. To this end, we conducted a user study with 50 students to identify both strengths and weaknesses of the CRX.
Strong end-user security practices benefit both the user and the hosting platform, but it is not well understood how companies communicate with their users to encourage these practices. This paper explores whether web companies and their platforms use different levels of language formality in these communications and tests the hypothesis that higher language formality leads to increased user intention to comply. We contribute a dataset and systematic analysis of 1,817 English-language strings in web security and privacy interfaces across 13 web platforms, showing strong variations in language. An online study with 512 participants further demonstrated that people perceive differences in language formality across platforms and that higher language formality is associated with higher self-reported intention to comply. Our findings suggest that formality can be an important factor in designing effective security and privacy prompts. We discuss implications of these results, including how to balance formality with platform language style. As the first work to analyze language formality in user-facing security communications, this study also provides valuable insights into how platforms can best communicate with users about account security.
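As one illustration of how the formality of interface strings can be quantified, the sketch below computes the Heylighen and Dewaele F-score from part-of-speech frequencies; this measure and the tag groupings are assumptions made for illustration, not necessarily the measure used in the paper.

```python
# Sketch: score the formality of a UI string with the Heylighen & Dewaele F-score,
# F = (noun% + adjective% + preposition% + article% - pronoun% - verb% - adverb%
#      - interjection% + 100) / 2, computed over part-of-speech frequencies.
# The Penn Treebank tag groupings below are an approximation (e.g., DT stands in
# for articles). Illustration only; not necessarily the paper's measure.
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

FORMAL = {"NN", "NNS", "NNP", "NNPS",                  # nouns
          "JJ", "JJR", "JJS",                          # adjectives
          "IN",                                        # prepositions
          "DT"}                                        # articles/determiners
INFORMAL = {"PRP", "PRP$",                             # pronouns
            "VB", "VBD", "VBG", "VBN", "VBP", "VBZ",   # verbs
            "RB", "RBR", "RBS",                        # adverbs
            "UH"}                                      # interjections

def formality_score(text: str) -> float:
    """Higher values indicate more formal language (roughly a 0-100 scale)."""
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text))]
    total = len(tags) or 1
    formal_pct = 100 * sum(t in FORMAL for t in tags) / total
    informal_pct = 100 * sum(t in INFORMAL for t in tags) / total
    return (formal_pct - informal_pct + 100) / 2

print(formality_score("Please verify your account credentials to continue."))
print(formality_score("Hey, looks like you got locked out, wanna reset it?"))
```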