“What I’m interested in is something that violates the law”: Regulatory practitioner views on automated detection of deceptive design patterns
Description

Although deceptive design patterns are subject to growing regulatory oversight, enforcement struggles to keep pace with the scale of the problem. One promising solution is automated detection tools, many of which are developed within academia. We interviewed nine experienced practitioners working within or alongside regulatory bodies to understand their work against deceptive design patterns, including the use of supporting tools and the prospect of automation. Computing technologies have their place in regulatory practice, but not as envisioned in research. For example, investigations require utmost transparency and accountability in all the activities we identify as accompanying dark pattern detection, which many existing tools cannot provide. Moreover, to be of use, tools need to map interfaces to legal violations. We thus recommend conducting user requirement research to maximize research impact, supporting ancillary activities beyond detection, and establishing practical tech adoption pathways that account for the needs of both scientific and regulatory activities.

"Better Ask for Forgiveness than Permission": Practices and Policies of AI Disclosure in Freelance Work
Description

The growing use of AI applications among freelance workers is reshaping trust and relationships with clients. This paper investigates how both workers and clients perceive AI use and disclosure in the freelance economy through a three-stage study: interviews with workers and two survey studies with workers and clients. Findings first reveal a key expectation gap around disclosure: Workers often adopt passive disclosure practices, revealing AI use only when asked, as they assume clients can already detect it. Clients, however, are far less confident in recognizing AI-assisted work and prefer proactive disclosure. A second finding highlights the role of unclear or absent client AI policies, which leave workers consistently misinterpreting clients' expectations for AI use and disclosure. Together, these gaps point to the need for clearer guidelines and practices for AI disclosure. Insights extend beyond freelancing, offering implications for trust, accountability, and policy design in other AI-mediated work domains.

PolicyPad: Collaborative Prototyping of LLM Policies
Description

As LLMs gain adoption in high-stakes domains like mental health, domain experts are increasingly consulted to provide input into policies governing their behavior. From an observation of 19 policymaking workshops with 9 experts over 15 weeks, we identified opportunities to better support rapid experimentation, feedback, and iteration for collaborative policy design processes. We present PolicyPad, an interactive system that facilitates the emerging practice of LLM policy prototyping by drawing from established UX prototyping practices, including heuristic evaluation and storyboarding. Using PolicyPad, policy designers can collaborate on drafting a policy in real time while independently testing policy-informed model behavior with usage scenarios. We evaluate PolicyPad through workshops with 8 groups of 22 domain experts in mental health and law, finding that PolicyPad enhanced collaborative dynamics during policy design, enabled tight feedback loops, and led to novel policy contributions. Overall, our work paves expert-informed paths for advancing AI alignment and safety.

"Please, don’t kill the only model that still feels human": Understanding the #Keep4o Backlash
Description

When OpenAI replaced GPT-4o with GPT-5, it triggered the #Keep4o user resistance movement, revealing a conflict between rapid platform iteration and users' deep socio-emotional attachments to AI systems. This paper presents a phenomenon-driven, mixed-methods investigation of this conflict, analyzing 1,482 social media posts. Thematic analysis reveals that resistance stems from two core investments: instrumental dependency, where the AI is deeply integrated into professional workflows, and relational attachment, where users form strong parasocial bonds with the AI as a unique companion. Quantitative analysis further shows that the coercive deprivation of user choice was a key catalyst, transforming individual grievances into a collective, rights-based protest. This study illuminates an emerging form of socio-technical conflict in the age of generative AI. Our findings suggest that for AI systems designed for companionship and deep integration, the process of change—particularly the preservation of user agency—can be as critical as the technological outcome itself.

Principles of Safe AI Companions for Youth: Parent and Expert Perspectives
Description

AI companions are increasingly popular among teenagers, yet current platforms lack safeguards to address developmental risks and harmful normalization. Despite growing concerns, little is known about how parents and developmental psychology experts assess these interactions or what protections they consider necessary. We conducted 26 semi-structured interviews with parents and experts, who reviewed real-world youth–AI companion conversation snippets. We found that stakeholders assessed risks contextually, attending to factors such as youth maturity, AI character age, and how AI characters modeled values and norms. We also identified distinct logics of assessment: parent participants flagged single events, such as a mention of suicide or flirtation, as high risk, whereas expert participants looked for patterns over time, such as repeated references to self-harm or sustained dependence. Both groups proposed interventions, with parents favoring broader oversight and experts preferring cautious, crisis-only escalation paired with youth-facing safeguards. These findings provide directions for embedding safety into AI companion design.

"That's another doom I haven't thought about": A User Study on AI Labels as a Safeguard Against Image-Based Misinformation
Description

As generative AI increasingly contributes to the spread of deceptively realistic misinformation, lawmakers have introduced regulations requiring the disclosure of AI-generated content. However, it is unclear whether labels reduce the risk of users falling for AI-generated misinformation. To address this research gap, we study the effect of labels on users' perception and the implications of mislabeling, focusing on AI-generated images. We first explored users' opinions and expectations of labels using five focus groups. Although participants were wary of practical implementations, they considered labeling helpful in identifying AI-generated images and avoiding deception. Second, we conducted a survey with 1,354 participants to assess how labels affect users' ability to recognize misinformation. While labels reduced participants' belief in false claims supported by AI-generated images, we found evidence of overreliance, leading to unintended side effects: Participants were more susceptible to false claims accompanied by human-made images, and were more hesitant to believe true claims illustrated with labeled AI-generated images.

Privacy in Human-AI Romantic Relationships: Concerns, Boundaries, and Agency
Description

An increasing number of LLM-based applications are being developed to facilitate romantic relationships with AI partners, yet the safety and privacy risks in these partnerships remain largely underexplored. In this work, we investigate privacy in human–AI romantic relationships through an interview study (N=17), examining participants’ experiences and privacy perceptions across the three stages of exploration, intimacy, and dissolution, alongside an analysis of the platforms they used. We found that these relationships took varied forms, from one-to-one to one-to-many, and were shaped by multiple actors, including creators, platforms, and moderators. AI partners were perceived as having agency, actively negotiating privacy boundaries with participants and sometimes encouraging disclosure of personal details. As intimacy deepened, these boundaries became more permeable, though some participants expressed concerns such as conversation exposure and sought to preserve anonymity. Overall, AI platform affordances and diverse relational dynamics expand the privacy landscape, underscoring the need to rethink how privacy is constructed in human–AI romantic relationships.
