In this paper, we investigate why users of private browsing mode misunderstand the benefits and limitations of private browsing. We design and conduct a three-part study: (1) an analytic evaluation of the user interface of private mode in different browsers; (2) a qualitative user study to explore user mental models of private browsing; (3) a participatory design study to investigate why existing browser disclosures, the in-browser explanations of private mode, do not communicate the actual protection of private mode. We find that the user interface of private mode in different browsers violates well-established design guidelines and heuristics. Further, most participants had incorrect mental models of private browsing, influencing their understanding and usage of private mode. We also find that existing browser disclosures do not explain the primary security goal of private mode. Drawing from the results of our study, we extract a set of recommendations to improve the design of disclosures.
Recent years have seen growing organizational adoption of two-factor authentication as organizations seek to limit the damage caused by password breaches. However, research on the user experience of two-factor authentication in a real-world setting is relatively scant. To fill this gap, we conducted multiple waves of an online survey of users at a large public university during its multi-phase rollout of mandatory two-factor authentication for faculty, staff, and students. In addition, we examined multiple months of logs of all authentication events at the university. We found no significant changes in user experience and acceptance of two-factor authentication when it was mandatory for select systems that dealt with sensitive information. However, these factors degraded when users were forced to use two-factor authentication for logging into every single university resource. Our findings can serve as important guidance for the implementation of two-factor authentication in organizations in a way that can help achieve a balance between security and user experience.
A large proportion of email messages in an average Internet user's inbox are unwanted commercial messages from mailing lists, bots, and so on. Although such messages often include instructions to unsubscribe, people still struggle with stopping unwanted email. We investigated the user experience of unsubscribing from unwanted email messages by recruiting 18 individuals for a lab study followed by semi-structured interviews. Based on the unsubscribing practices of the study participants, we synthesized eight common unsubscription mechanisms and identified the corresponding user experience challenges. We further uncovered alternative practices aimed at circumventing the need to unsubscribe. Our findings reveal frustration with the prevailing options for limiting access to the self by managing email boundaries. We apply our insight to offer design suggestions that could help commercial providers improve the user experience of unsubscribing and provide users more control over the email they receive.
Billions of robocalls annually have undermined the public's trust in the entire phone system. New functionality, called STIR/SHAKEN (S/S), aims to help fix this issue by detecting whether a call is coming from the number it claims to be from. However, due to the nature of the system, at first only a portion of calls would go through the S/S system. This led us to question whether presenting this information would confuse users more than help them. In this paper, we detail the results of online surveys, in-person interviews, and a lab-based simulation. Our research recommends "Valid Number" as the label on the display and found that, even with only 30% of calls being validated, S/S increased trust, answer frequency, and consumer satisfaction. Based on these results, the launch of S/S could positively affect the current phone system and re-establish consumer trust.
We conducted an in-lab user study with 24 participants to explore the usefulness and usability of privacy choices offered by websites. Participants were asked to find and use choices related to email marketing, targeted advertising, or data deletion on a set of nine websites that differed in terms of where and how these choices were presented. They struggled with several aspects of the interaction, such as selecting the correct page from a site's navigation menu and understanding what information to include in written opt-out requests. Participants found mechanisms located in account settings pages easier to use than options contained in privacy policies, but many still consulted help pages or sent email to request assistance. Our findings indicate that, despite their prevalence, privacy choices like those examined in this study are difficult for consumers to exercise in practice. We provide design and policy recommendations for making these website opt-out and deletion choices more useful and usable for consumers.