Dark patterns are deceptive strategies that recent work in human-computer interaction (HCI) has documented across digital domains, including social networking sites (SNSs). While research has shown that people often struggle to recognise dark patterns, few studies consider how vulnerable populations experience them, including people with attention deficit hyperactivity disorder (ADHD), who may be especially susceptible to attention-grabbing tricks. Based on an interactive web study with 135 participants, we investigate SNS users' ability to recognise and avoid dark patterns by comparing results from participants with and without ADHD. In line with prior work, we observed overall low recognition of dark patterns, with no significant differences between the two groups. Yet, participants with ADHD were able to avoid specific dark patterns more often. Our results extend previous work by examining dark patterns in a realistic environment and offer insights into their effects on vulnerable populations.
Dark patterns are ubiquitous in digital systems, impacting users throughout their journeys on many popular apps and websites. While substantial efforts from the research community in the last five years have led to consolidated taxonomies and an ontology of dark patterns, most characterizations of these patterns have focused on static images or isolated pattern types. In this paper, we leverage documents from a US Federal Trade Commission complaint describing dark patterns in Amazon Prime's "Iliad Flow" to illustrate the interplay of dark patterns across a user journey. We use this case study to show how dark patterns can be characterized and mapped over time, supporting a sufficient audit trail and consistent characterization of dark patterns at high- and meso-level scales. We conclude by describing the groundwork for a methodology of Temporal Analysis of Dark Patterns (TADP) that allows for rigorous identification of dark patterns by researchers, regulators, and legal scholars.
Dark patterns (DPs) refer to unethical user interface designs that deceive users into making unintended decisions, compromising their privacy, safety, financial security, and more. Prior research has mainly focused on defining and classifying DPs, as well as assessing their impact on users, while legislative and technical efforts to mitigate them remain limited. Consequently, users are still exposed to DP risks, making it urgent to educate them on avoiding these harms. However, there has been little focus on developing educational interventions for DP awareness. This study addresses this gap by introducing DPTrek, an experiential learning (EL) platform that educates users through simulated real-world DP cases. Both qualitative and quantitative evaluations show the effectiveness of DPTrek in helping users identify and manage DPs. The study also offers insights for future DP education and research, highlighting challenges such as user-unfriendly taxonomies and the lack of practical mitigation solutions.
To protect consumer privacy, the California Consumer Privacy Act (CCPA) requires businesses to provide consumers with a straightforward way to opt out of the sale and sharing of their personal information. However, the control that businesses enjoy over the opt-out process allows them to impose hurdles on consumers aiming to opt out, including by employing dark patterns. Motivated by the enactment of the California Privacy Rights Act (CPRA), which strengthens the CCPA and explicitly forbids certain dark patterns in the opt-out process, we investigate how dark patterns are used in opt-out processes and assess their compliance with CCPA regulations. Our research on 330 CCPA-subject websites reveals that these websites employ a variety of dark patterns. Some of these patterns are explicitly prohibited under the CCPA; others seem to take advantage of legal loopholes.
Recent work has catalogued a variety of ``dark'' design patterns, including deception, that undermine user intent. We focus on deceptive ``placebo'' control settings for social media that do not work. While prior work reported that placebo controls increase feed satisfaction, we extend this body of knowledge by examining possible placebo mechanisms as well as potential side effects and confounds in the original study. Knowledge of these placebo mechanisms can help predict potential harms to users and prioritize the most problematic cases for regulators to pursue. In an online experiment, participants (N=762) browsed a Twitter feed with no control setting, a working control setting, or a placebo control setting. We found a placebo effect much smaller in magnitude than originally reported. This finding adds another objection to the use of placebo controls in social media settings, while our methodology offers insights into identifying confounds in placebo experiments in HCI.
Modern algorithmic recommendation systems seek to engage users through behavioral content-interest matching. While many platforms recommend content based on engagement metrics, others, like TikTok, deliver interest-based content, resulting in recommendations perceived to be hyper-personalized compared to other platforms. TikTok's robust recommendation engine has led some users to suspect that the algorithm knows them ``better than they know themselves,'' but this is not always true. In this paper, we explore TikTok users' perceptions of recommended content on their For You Page (FYP), specifically calling attention to unwanted recommendations. Through qualitative interviews with 14 current and former TikTok users, we find themes of frustration with recommended content, attempts to rid themselves of unwanted content, and varying degrees of success in eschewing such content. We discuss implications in the larger context of folk theorization and contribute concrete tactical and behavioral examples of \textit{algorithmic persistence}.