Digital technologies have positively transformed society, but they have also led to undesirable consequences not anticipated at the time of design or development. We posit that insights into past undesirable consequences can help researchers and practitioners gain awareness and anticipate potential adverse effects. To test this assumption, we introduce BLIP, a system that extracts real-world undesirable consequences of technology from online articles, summarizes and categorizes them, and presents them in an interactive, web-based interface. In two user studies with 15 researchers in various computer science disciplines, we found that BLIP substantially increased the number and diversity of undesirable consequences they could list in comparison to relying on prior knowledge or searching online. Moreover, BLIP helped them identify undesirable consequences relevant to their ongoing projects, made them aware of undesirable consequences they “had never considered,” and inspired them to reflect on their own experiences with technology.
https://doi.org/10.1145/3613904.3642054
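The abstract does not detail BLIP's extraction pipeline, so a minimal sketch can help make the idea concrete. Assuming a zero-shot classification approach with hypothetical consequence categories (the labels below are illustrative, not BLIP's actual taxonomy), categorizing a consequence mention from article text might look like:

```python
# Illustrative sketch of categorizing a consequence mention pulled from
# an article. The zero-shot approach and the candidate labels are
# assumptions; BLIP's actual pipeline may differ.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

# Hypothetical consequence categories, for illustration only.
labels = ["privacy harm", "misinformation", "discrimination",
          "addiction", "environmental impact"]

sentence = ("The recommendation algorithm kept users scrolling for hours, "
            "contributing to compulsive use.")

result = classifier(sentence, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label:22s} {score:.2f}")
```

A real system would also need an upstream step that finds candidate consequence sentences in crawled articles and a downstream step that summarizes and deduplicates them before presentation.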
Consumers increasingly interact with workers through technology-mediated marketplaces (TMMs)—environments where third-party companies manage interactions, control information, and constrain behavioral choices. We argue that opacity in how TMMs operate can make it difficult for consumers to judge what is fair when interacting with other economic actors. To better understand how consumers perceive and act on fairness in TMMs, we examine the practice of tipping—a consumer behavior in the United States that is strongly associated with assessments of fairness. Through interviews with consumers, we find three distinct ways that consumers discuss fairness in tipping within third-party food delivery: fairness as supporting a living wage, fairness as reciprocity, and fairness in distribution of payments. We discuss how TMMs codify economic interactions and change the social meaning of a tip for consumers, how consumers perceive an obligation to tip drivers differently in TMMs, and how TMMs alter the information consumers use to determine accountability.
https://doi.org/10.1145/3613904.3642678
The recent success of Natural Language Processing (NLP) relies heavily on pre-trained text representations such as word embeddings. However, pre-trained text representations may exhibit social biases and stereotypes, e.g., disproportionately associating gender with occupations. Although prior work has presented various bias detection algorithms, these algorithms are limited to pre-defined biases and lack effective interaction support. In this work, we propose STILE, an interactive system that supports mixed-initiative bias discovery and debugging in pre-trained text representations. STILE gives users the flexibility to interactively define and customize the biases to detect based on their interests. Furthermore, it provides a bird’s-eye view of detected biases in a chord diagram and allows users to dive into the training data to investigate how a bias developed. Our lab study and expert review confirm the usefulness and usability of STILE as an effective aid in identifying and understanding biases in pre-trained text representations.
https://doi.org/10.1145/3613904.3642111
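To make the gender-occupation example concrete, the following is a minimal probe in the style of classic embedding-bias tests, projecting occupation words onto a gender direction. It is a hedged sketch, not STILE's algorithm; the word pairs, occupation list, and choice of GloVe vectors are all illustrative assumptions:

```python
# Illustrative gender-occupation bias probe for word embeddings.
# This mirrors classic direction-projection tests, not STILE's
# actual method, which the abstract does not specify.
import numpy as np
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-100")  # pre-trained GloVe vectors

def normalize(v):
    return v / np.linalg.norm(v)

# Approximate a "gender direction" from paired gendered words.
pairs = [("she", "he"), ("woman", "man"), ("her", "him")]
gender_dir = normalize(
    np.mean([model[a] - model[b] for a, b in pairs], axis=0)
)

occupations = ["nurse", "engineer", "teacher", "programmer", "librarian"]
for word in occupations:
    score = float(normalize(model[word]) @ gender_dir)
    # Positive scores lean toward the female-associated pole,
    # negative toward the male-associated pole.
    print(f"{word:12s} {score:+.3f}")
```

A single fixed direction like this is exactly the kind of pre-defined bias the abstract critiques; an interactive tool such as STILE would let users swap in their own word sets and inspect the training data behind a detected association.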
Deceptive and coercive design practices are increasingly used by companies to extract profit, harvest data, and limit consumer choice. Dark patterns represent the most common contemporary amalgamation of these problematic practices, connecting designers, technologists, scholars, regulators, and legal professionals in transdisciplinary dialogue. However, a lack of universally accepted definitions across the academic, legislative, practitioner, and regulatory space has likely limited the impact that scholarship on dark patterns might have in supporting sanctions and evolved design practices. In this paper, we seek to support the development of a shared language of dark patterns, harmonizing ten existing regulatory and academic taxonomies of dark patterns and proposing a three-level ontology with standardized definitions for 64 synthesized dark pattern types across low-, meso-, and high-level patterns. We illustrate how this ontology can support translational research and regulatory action, including transdisciplinary pathways to extend our initial types through new empirical work across application and technology domains.
https://doi.org/10.1145/3613904.3642436
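A three-level ontology like this lends itself to simple machine-readable encodings for translational or regulatory tooling. The sketch below is hypothetical: the example pattern names echo common usage in the dark patterns literature, but their level placement and one-line definitions are illustrative, not the paper's authoritative set of 64:

```python
# Hypothetical encoding of a three-level dark pattern ontology.
# Example names and placements are illustrative only.
from dataclasses import dataclass, field

@dataclass
class PatternType:
    name: str
    level: str            # "high" | "meso" | "low"
    definition: str
    children: list["PatternType"] = field(default_factory=list)

ontology = PatternType(
    name="Sneaking",
    level="high",
    definition="Hiding, disguising, or delaying information "
               "relevant to a user's decision.",
    children=[
        PatternType(
            name="Hiding Information",
            level="meso",
            definition="Making decision-relevant information "
                       "hard to find or notice.",
            children=[
                PatternType("Hidden Costs", "low",
                            "Revealing fees only late in a transaction."),
            ],
        ),
    ],
)

def flatten(node, out=None):
    """Walk the tree and collect every pattern type with its level."""
    out = out if out is not None else []
    out.append((node.level, node.name))
    for child in node.children:
        flatten(child, out)
    return out

print(flatten(ontology))
# [('high', 'Sneaking'), ('meso', 'Hiding Information'), ('low', 'Hidden Costs')]
```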
Current dark pattern research tells designers what not to do, but how do they know what to do? In contrast to prior approaches that focus on patterns to avoid and their underlying principles, we present a framework grounded in positive expected behavior against which deviations can be judged. To articulate this expected behavior, we use concepts—abstract units of functionality that compose applications. We define a design as dark when its concepts violate users' expectations and benefit the application provider at the user's expense. Though user expectations can differ, users tend to develop common expectations as they encounter the same concepts across multiple applications, which we can record in a concept catalog as standard concepts. We evaluate our framework and concept catalog through three studies, illustrating their ability to describe existing dark patterns, evaluate nuanced designs, and document common application functionality.
https://doi.org/10.1145/3613904.3642781
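One plausible way to operationalize a concept catalog is to record each standard concept's expected behaviors and flag designs that deviate from them. The encoding below is a hypothetical sketch; the field names and the "Subscription" example are assumptions for illustration, not the paper's notation:

```python
# Hypothetical encoding of a concept-catalog entry and a check of a
# specific design against the standard concept's expectations.
from dataclasses import dataclass

@dataclass
class Concept:
    name: str
    purpose: str
    expectations: list[str]   # behaviors users have come to expect

SUBSCRIPTION = Concept(
    name="Subscription",
    purpose="Grant ongoing access in exchange for recurring payment.",
    expectations=[
        "cancelling is as easy as subscribing",
        "renewal dates and prices are disclosed up front",
    ],
)

def violated(concept: Concept, observed: set[str]) -> list[str]:
    """Return the expected behaviors a design fails to honor."""
    return [e for e in concept.expectations if e not in observed]

design = {"renewal dates and prices are disclosed up front"}
print(violated(SUBSCRIPTION, design))
# ['cancelling is as easy as subscribing'] — a candidate dark pattern
# if the omission benefits the provider at the user's expense.
```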