Online status indicators (OSIs) improve online communication by helping users convey and assess availability, but they also let users infer potentially sensitive information about one another. We surveyed 200 smartphone users to understand the extent to which they are aware of the information shared via OSIs and the extent to which this awareness shapes their behavior. Despite familiarity with OSIs, participants misunderstand many aspects of how these indicators work, and they describe carefully curating and seeking to control their self-presentation via OSIs. Some participants further report leveraging OSI-conveyed information for problematic and malicious purposes. Drawing on existing constructs of app dependence (i.e., when users contort their behavior to meet an app's demands) and app enablement (i.e., when apps enable users to engage in behaviors they feel good about), we demonstrate that current OSI design patterns promote app dependence, and we call for a shift toward OSI designs that are more enabling for users.
Decaying representations gradually make social media content less visible to readers over time, which can help users disassociate from past online activities. We explore whether shrinking, one such decaying representation, influences managers' assessments of job candidates and their simulated hiring decisions, compared to seeing a full profile or an empty profile with no posts. Our 3 × 2 between-subjects crowdsourced survey (N = 360 US managers) shows that shrunk or empty profiles led to more positive hiring decisions than profiles in their original, full format. Moreover, shrunk profiles further contributed to more positive impressions of the candidates. Shrinking did not benefit candidates of one gender more than the other, and managers' demographics had limited impact on their assessments. Further, the managers we surveyed reported regularly searching job candidates' social media profiles in real life, suggesting that shrinking could support users' privacy in practice. Finally, we present implications for individuals' privacy on social media.
Many people share online accounts, even in situations where high privacy and security are expected. Naturally, such sharing does not last forever. This paper reports on the privacy and security challenges that people experience when they stop sharing online accounts. We conducted semi-structured interviews with 25 participants who had stopped sharing at least one online account in the 12 months preceding the study. Our results suggest that users experience cognitive and psychosocial burdens when ending account sharing. We offer suggestions for how the design of online accounts could better support users when they end account sharing.
Digital resources are often collectively owned and shared by small social groups (e.g., friends sharing Netflix accounts, roommates sharing game consoles, families sharing WhatsApp groups). Yet, little is known about (i) how these groups jointly navigate cybersecurity and privacy (S&P) decisions for shared resources, (ii) how shared experiences influence individual S&P attitudes and behaviors, and (iii) how well existing S&P controls map onto group needs. We conducted group interviews and a supplemental diary study with nine social groups (n=34) of varying relationship types. We identified what resources groups shared and why and how they shared them, the threat models they jointly construed, and how these factors influenced group strategies for securing shared resources. We also identified missed opportunities for cooperation and stewardship among group members that could have led to improved S&P behaviors, and found that existing S&P controls often fail to meet the needs of these small social groups.
Anonymous networks intended to enhance privacy and evade censorship are also being exploited for abusive activities. Technical schemes have been proposed to selectively revoke the anonymity of abusive users or to prevent them from anonymously accessing online service providers. We designed an empirical survey study with 75 users of the Tor anonymity network to assess the effects of deploying these schemes. We evaluated the proposed schemes based on examples of the intended or abusive use cases they may address, their technical implementation, and the types of entities responsible for enforcing them. Our results show that revocable anonymity schemes would particularly deter the intended uses of anonymous networks. We found a lower reported decrease in usage for schemes addressing spam than for those directly compromising free expression. However, participants were concerned that all technical mechanisms for addressing anonymous abuse could be exploited beyond their intended goals (51.7%) or used to harm users (43.8%). Participants were also distrustful of the enforcing entities involved (43.8%) and concerned that they would be unable to verify how particular mechanisms were applied (49.3%).