Research in HCI has shown a growing interest in unethical design practices across numerous domains, often referred to as 'dark patterns'. There is, however, a gap in the related literature regarding social networking services (SNSs). In this context, studies emphasise a lack of users' self-determination regarding control over personal data and time spent on SNSs. We collected over 16 hours of screen recordings from Facebook's, Instagram's, TikTok's, and Twitter's mobile applications to understand how dark patterns manifest in these SNSs. For this task, we turned to HCI experts to mitigate the difficulties non-expert participants have in recognising dark patterns, as prior studies have noted. Supported by the recordings, two authors of this paper conducted a thematic analysis based on previously described taxonomies, manually classifying the recorded material. This analysis yielded two key findings: we observed which dark pattern instances occur in SNSs and identified two strategies – engaging and governing – comprising five previously undescribed dark patterns.
https://doi.org/10.1145/3544548.3580695
Doomsurfing, doomscrolling, zombie scrolling: these new additions to the tech vocabulary describe what has become part of our everyday routine, scrolling endlessly through social media feeds. Some users report a sense of compulsion, a decrease in mental wellbeing, and an increased sense of distraction. A common complaint among users traces back to the Facebook newsfeed. In a field experiment with real Facebook users (n = 138), we investigate the difference between a strict newsfeed diet (where the newsfeed is automatically reduced to a minimum) and a self-regulated newsfeed diet (where the newsfeed is reduced, but users can then manage its content). Our results indicate that both newsfeed diets are effective at reducing the time spent on Facebook's platform (-64% for the strict diet, -39% for the self-regulated diet). Our findings also suggest that these design interventions come with positive and negative user experiences, such as increased self-awareness and fear of missing out (FOMO).
https://doi.org/10.1145/3544548.3581187
Sleep plays a paramount role in maintaining healthy bodily functioning. Yet poor sleep is an increasingly prevalent global health concern. Most current sleep technology tracks sleep; how to design technology that actively promotes sleep remains relatively underexplored. We highlight the potential of closed-loop systems for promoting sleep onset and explore this through the design and study of “Dozer”, a closed-loop beanie that accelerates sleep onset through auditory and electrical brain stimulation after detecting drowsiness in EEG. In an in-the-wild study, participant interviews revealed three UX themes (closed-loop neurocentric agency, awareness of hardware, and awareness of feedback), which ultimately suggested that participants fell asleep in spite of Dozer rather than through its assistance. We interpret these results and provide actionable design tactics to inform the design of future closed-loop sleep systems. We hope this work gives rise to a deeper understanding of designing closed techno-physiological loops.
https://doi.org/10.1145/3544548.3581044
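The closed-loop pattern that Dozer exemplifies (sense a physiological signal, infer a state, actuate in response) can be sketched in a few lines. The Python sketch below is purely illustrative, not the authors' implementation: the theta/alpha power ratio as a drowsiness proxy, the sampling rate, the trigger threshold, and the simulated headset driver read_eeg_window() are all assumptions introduced here.

    # Illustrative closed-loop sketch: monitor an EEG stream, estimate
    # drowsiness, and trigger stimulation once a threshold is crossed.
    # The heuristic and all parameters are assumptions, not Dozer's method.
    import numpy as np

    FS = 256         # sampling rate in Hz (assumed)
    WINDOW_S = 4     # analysis window length in seconds
    THRESHOLD = 1.5  # drowsiness threshold on the theta/alpha ratio (assumed)

    def band_power(window, fs, lo, hi):
        # Mean spectral power of the window between lo and hi Hz.
        freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
        psd = np.abs(np.fft.rfft(window)) ** 2
        mask = (freqs >= lo) & (freqs < hi)
        return psd[mask].mean()

    def drowsiness_score(window, fs=FS):
        # Theta/alpha ratio: theta power grows and alpha power fades as
        # sleep onset approaches, so the ratio rises with drowsiness.
        return band_power(window, fs, 4.0, 8.0) / band_power(window, fs, 8.0, 12.0)

    def read_eeg_window(rng, drowsy):
        # Stand-in for the headset driver: synthesizes one window of EEG.
        t = np.arange(WINDOW_S * FS) / FS
        alpha_amp, theta_amp = (0.3, 1.0) if drowsy else (1.0, 0.3)
        return (alpha_amp * np.sin(2 * np.pi * 10 * t)
                + theta_amp * np.sin(2 * np.pi * 6 * t)
                + 0.2 * rng.standard_normal(t.size))

    def run_loop(n_windows=10):
        rng = np.random.default_rng(0)
        for i in range(n_windows):
            window = read_eeg_window(rng, drowsy=(i >= 5))  # drowsiness sets in halfway
            score = drowsiness_score(window)
            if score > THRESHOLD:
                print(f"window {i}: score {score:.2f} -> trigger stimulation")
            else:
                print(f"window {i}: score {score:.2f} -> keep monitoring")

    if __name__ == "__main__":
        run_loop()

In a real system, the stimulation branch would drive the auditory or electrical actuators, and the loop would also need to detect sleep onset so that stimulation stops once it is no longer needed.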
Many tech companies exploit psychological vulnerabilities to design digital interfaces that maximize the frequency and duration of user visits. Consequently, users often report feeling dissatisfied with time spent on such services. Prior work has developed typologies of damaging design patterns (or dark patterns) that contribute to financial and privacy harms, which has helped designers to resist these patterns and policymakers to regulate them. However, we are missing a collection of similar problematic patterns that lead to attentional harms. To close this gap, we conducted a systematic literature review of what we call 'attention capture damaging patterns' (ACDPs). We analyzed 43 papers to identify the characteristics of these patterns, the psychological vulnerabilities they exploit, and their impact on digital wellbeing. We propose a definition of ACDPs and identify eleven common types, from Time Fog to Infinite Scroll. Our typology offers technologists and policymakers a common reference to advocate, design, and regulate against attentional harms.
https://doi.org/10.1145/3544548.3580729
YouTube has many features, such as homepage recommendations, that encourage users to explore its vast library of videos. However, when users visit YouTube with a specific intention, e.g., learning how to program in Python, these features to encourage exploration are often distracting. Prior work has introduced 'commitment interfaces' that restrict social media use but has found that they often indiscriminately block needed content. In this paper, we describe the design, development, and evaluation of an 'adaptable commitment interface', the SwitchTube mobile app, in which users can toggle between two interfaces when watching YouTube videos: Focus Mode (search-first) and Explore Mode (recommendations-first). In a three-week field deployment with 46 US participants, we evaluate how the ability to switch between interfaces affects user experience, finding that it provides users with a greater sense of agency, satisfaction, and goal alignment. We conclude with design implications for how adaptable commitment interfaces can support digital wellbeing.
https://doi.org/10.1145/3544548.3580703
Deceptive design patterns (known as dark patterns) are interface characteristics which modify users' choice architecture to gain users' attention, data, and money. Deceptive design patterns have yet to be documented in safety technologies, despite evidence that designers of safety technologies make decisions that can powerfully influence user behavior. To address this gap, we conduct a case study of the Citizen app, a commercially available technology which notifies users about local safety incidents. We bound our study to Atlanta and triangulate interview data with an analysis of the user interface. Our results indicate that Citizen heightens users' anxiety about safety while encouraging the use of profit-generating features which offer security. These findings contribute to an emerging conversation about how deceptive design patterns interact with sociocultural factors to produce 'deceptive infrastructure'. We propose the need to expand an existing taxonomy of harm to include 'emotional load' and 'social injustice' and offer recommendations for designers interested in dismantling the deceptive infrastructure of safety technologies.
https://doi.org/10.1145/3544548.3581258