This study session has ended. Thank you for your participation.
Dark patterns are malicious UI design strategies that nudge users toward decisions that go against their best interests. To create technical countermeasures against them, dark patterns must be automatically detectable. While researchers have devised algorithms to detect some patterns automatically, little work has used the resulting detections to technically counter the effects of dark patterns when users encounter them on their devices.
To address this, we tested three visual countermeasures against 13 common dark patterns in an interactive lab study. The countermeasures we tested either (a) highlighted and explained the manipulation, (b) hid it from the user, or (c) let the user switch between the original view and the hidden version. From our data, we extracted multiple clusters of dark patterns for which participants preferred specific countermeasures for similar reasons. To support the creation of effective countermeasures, we discuss our findings in relation to a recent ontology of dark patterns.
Responsible design of AI systems is a shared goal across the HCI and AI communities. Responsible AI (RAI) tools have been developed to support practitioners in identifying, assessing, and mitigating ethical issues during AI development. These tools take many forms (e.g., design playbooks, software toolkits, documentation protocols). However, research suggests that the use of RAI tools is shaped by organizational contexts, raising questions about how effective such tools are in practice. To better understand how RAI tools are—and might be—evaluated, we conducted a qualitative analysis of 37 publications that discuss evaluations of RAI tools. We find that most evaluations focus on usability, while questions about tools’ effectiveness in changing AI development are sidelined. While usability evaluations are an important approach to evaluating RAI tools, we draw on evaluation approaches from other fields to highlight developer- and community-level steps that could support evaluations of RAI tools’ effectiveness in shaping AI development practices and outcomes.
This paper examines the pervasiveness of consequentialist thinking in human-computer interaction (HCI) and foregrounds the value of non-consequential, dialectical activities in human life. Dialectical activities are human endeavors whose value is intrinsic to the activity itself, such as being a good friend or parent, engaging in art-making or music-making, conducting research, and so on. I argue that computers—the ultimate consequentialist machinery for reliably transforming inputs into outputs—cannot be the be-all and end-all for promoting human values rooted in dialectical activities. I examine how HCI as a field of study might reconcile the consequentialist machines we have with the dialectical activities we value, and propose computational ecosystems as a vision for HCI that makes proper space for dialectical activities.
Design and technology practitioners are increasingly aware of the ethical impact of their work practices and desire tools to support their ethical awareness across a range of contexts. In this paper, we report findings from a series of six co-creation workshops with 26 technology and design practitioners that supported the creation of a bespoke ethics-focused action plan. Using qualitative content analysis and thematic analysis, we identified a range of roles and process moves employed by practitioners and design students with professional experience, and we illustrate how the interplay of these elements shaped the creation of their action plans and revealed aspects of their ethical design complexity. We conclude with implications for supporting ethics in socio-technical practice and opportunities for the further development of methods that support ethical engagement and resonate with the realities of practice.
Subscribing to online services is typically a straightforward process, but cancelling them can be arduous and confusing, leading many to give up and continue paying for services they no longer use. Making the cancellation process intentionally difficult is a recognized dark pattern known as the Roach Motel.
This paper characterizes the subscription and cancellation flows of popular news websites from four different countries and discusses them in the context of recent regulatory changes.
We study the design features that make it difficult to cancel a subscription and identify several cancellation flows with intentional barriers, such as forcing users to call a representative or type in a specific phrase.
Further, we find many subscription flows that do not adequately inform users about recurring charges.
Our results point to a growing need for effective regulation of designs that trick, coerce, or manipulate users into paying for subscriptions they do not want.