Working with AI (or not)

Conference Name
CHI 2025
Judging Phishing Under Uncertainty: How Do Users Handle Inaccurate Automated Advice?
Abstract

Providing accurate and actionable advice about phishing emails is challenging. The majority of advice is generic and hard to implement. Phishing emails that pass through filters and land in user inboxes are usually sophisticated and exploit differences between how humans and computers interpret emails. Therefore, users need accurate and relevant guidance to take the right action. This study investigates the effectiveness of guidance based on features extracted from emails, which even in AI-driven systems can sometimes be inaccurate, leading to poor advice. We examined three conditions: control (generic advice), perfect advice, and realistic advice, through an online survey of 489 participants on Prolific, and measured user accuracy and confidence in phishing detection with and without guidance. Our findings indicate that having advice specific to the email is more effective than generic guidance (control). Inaccuracies in the guidance can also impact user decisions and reduce detection accuracy.

Award
Honorable Mention
Authors
Tarini Saka
University of Edinburgh, Edinburgh, United Kingdom
Kalliopi Vakali
University of Edinburgh, Edinburgh, United Kingdom
Adam D. G. Jenkins
King's College London, London, United Kingdom
Nadin Kokciyan
University of Edinburgh, Edinburgh, United Kingdom
Kami Vaniea
University of Waterloo, Waterloo, Ontario, Canada
DOI

10.1145/3706598.3714267

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714267

Social by Nature: How Socio-tecture Shapes the Work of SMBs and Considerations for Reimagining Collaborative Human-AI Systems
Abstract

Globally, small and medium-sized businesses (SMBs) have had to adapt to rapid digital changes, a shift accelerated by the COVID-19 pandemic. In Kenya, this transition has involved a significant move towards digital management tools. While many had already experienced marked digitalization over the last few decades, they completed this work differently from their European and North American counterparts. This study explores how Kenyan SMBs continue to navigate these changes and considers the potential of Generative AI in this context. Applying the concept of socio-tecture—which emphasizes social networks, relational business practices, and employees as knowledge producers—we analyze how these elements influence SMB operations in Nairobi. We highlight how socio-tecture affects business performance and growth, and discuss how an Afro-centric strengths-based approach might offer unique opportunities and challenges with the influx of new technologies like Generative AI.

Authors
Elizabeth Ankrah
University of California, Irvine, Irvine, California, United States
Kagonya Awori
Google, Nairobi, Kenya
Stephanie Nyairo
Microsoft Research Africa, Nairobi, Kenya
Mercy Muchai
Microsoft Research Africa, Nairobi, Kenya
Millicent Ochieng
Microsoft Research Africa, Nairobi, Kenya
Mark Kariuki
University of Nairobi, Nairobi, Kenya
Gillian R. Hayes
University of California, Irvine, Irvine, California, United States
Jacki O'Neill
Microsoft Research Africa, Nairobi, Kenya
DOI

10.1145/3706598.3715019

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3715019

Objection Overruled! Lay People can Distinguish Large Language Models from Lawyers, but still Favour Advice from an LLM
Abstract

Large Language Models (LLMs) are seemingly infiltrating every domain, and the legal context is no exception. In this paper, we present the results of three experiments (total N = 288) that investigated lay people's willingness to act upon, and their ability to discriminate between, LLM- and lawyer-generated legal advice. In Experiment 1, participants judged their willingness to act on legal advice when the source of the advice was either known or unknown. When the advice source was unknown, participants indicated that they were significantly more willing to act on the LLM-generated advice. This source-unknown result was replicated in Experiment 2. Intriguingly, despite participants indicating higher willingness to act on LLM-generated advice in Experiments 1 and 2, participants discriminated between the LLM- and lawyer-generated texts at significantly above chance level in Experiment 3. Lastly, we discuss potential explanations and risks of our findings, limitations, and future work.

Authors
Eike Schneiders
University of Nottingham, Nottingham, United Kingdom
Tina Seabrooke
University of Southampton, Southampton, United Kingdom
Joshua Krook
University of Antwerp, Antwerp, Belgium
Richard Hyde
University of Nottingham, Nottingham, United Kingdom
Natalie Leesakul
University of Nottingham, Nottingham, Nottinghamshire, United Kingdom
Jeremie Clos
University of Nottingham, Nottingham, Nottinghamshire, United Kingdom
Joel E. Fischer
University of Nottingham, Nottingham, United Kingdom
DOI

10.1145/3706598.3713470

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713470

AI Rivalry as a Craft: How Resisting and Embracing Generative AI Are Reshaping the Writing Profession
Abstract

Generative AI (GAI) technologies are disrupting professional writing, challenging traditional practices. Recent studies explore GAI adoption experiences of creative practitioners, but we know little about how these experiences evolve into established practices and how GAI resistance alters these practices. To address this gap, we conducted 25 semi-structured interviews with writing professionals who adopted and/or resisted GAI. Using the theoretical lens of Job Crafting, we identify four strategies professionals employ to reshape their roles. Writing professionals employed GAI resisting strategies to maximize human potential, reinforce professional identity, carve out a professional niche, and preserve credibility within their networks. In contrast, GAI-enabled strategies allowed writers who embraced GAI to enhance desirable workflows, minimize mundane tasks, and engage in new AI-managerial labor. These strategies amplified their collaborations with GAI while reducing their reliance on other people. We conclude by discussing implications of GAI practices on writers' identity and practices as well as crafting theory.

Authors
Rama Adithya Varanasi
New York University, New York City, New York, United States
Batia Mishan Wiesenfeld
New York University, New York, New York, United States
Oded Nov
New York University, New York City, New York, United States
DOI

10.1145/3706598.3714035

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714035

Generative AI Uses and Risks for Knowledge Workers in a Science Organization
Abstract

Generative AI could enhance scientific discovery by supporting knowledge workers in science organizations. However, the real-world applications and perceived concerns of generative AI use in these organizations are uncertain. In this paper, we report on a collaborative study with a US national laboratory, with employees spanning Science and Operations, about their use of generative AI tools. We surveyed 66 employees, interviewed a subset (N=22), and measured early lab-wide adoption of an internal generative AI interface called Argo. We have four findings: (1) Argo usage data shows small but increasing use by Science and Operations employees; common current and envisioned use cases for generative AI in this context conceptually fall into either a (2) copilot or (3) workflow agent modality; and (4) concerns include sensitive data security, academic publishing, and job impacts. Based on our findings, we make recommendations for generative AI use in science and other organizations.

Authors
Kelly B. Wagman
University of Chicago, Chicago, Illinois, United States
Matthew T. Dearing
Argonne National Laboratory, Lemont, Illinois, United States
Marshini Chetty
University of Chicago, Chicago, Illinois, United States
DOI

10.1145/3706598.3713827

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713827
