Working with Intelligent Systems and Tools

Conference
CHI 2022
Owning Mistakes Sincerely: Strategies for Mitigating AI Errors
Abstract

Interactive AI systems such as voice assistants are bound to make errors because of imperfect sensing and reasoning. Prior human-AI interaction research has illustrated the importance of various strategies for error mitigation in repairing the perception of an AI following a breakdown in service. These strategies include explanations, monetary rewards, and apologies. This paper extends prior work on error mitigation by exploring how different methods of apology conveyance may affect people's perceptions of AI agents; we report an online study (N=37) that examines how varying the sincerity of an apology and the assignment of blame (on either the agent itself or others) affects participants' perceptions and experience with erroneous AI agents. We found that agents that openly accepted the blame and apologized sincerely for mistakes were thought to be more intelligent, likeable, and effective in recovering from errors than agents that shifted the blame to others.

Authors
Amama Mahmood
Johns Hopkins University, Baltimore, Maryland, United States
Jeanie W. Fung
Johns Hopkins University, Baltimore, Maryland, United States
Isabel Won
Johns Hopkins University, Baltimore, Maryland, United States
Chien-Ming Huang
Johns Hopkins University, Baltimore, Maryland, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517565

Video
Exploring Technical Reasoning in Digital Tool Use
Abstract

The Technical Reasoning hypothesis in cognitive neuroscience posits that humans engage in physical tool use by reasoning about mechanical interactions among objects. By modeling the use of objects as tools based on their abstract properties, this theory explains how tools can be re-purposed beyond their assigned function. This paper assesses the relevance of Technical Reasoning to digital tool use. We conducted an experiment with 16 participants that forced them to re-purpose commands to complete a text layout task. We analyzed self-reported scores of creative personality and experience with text editing, and found a significant association between re-purposing performance and creativity, but not with experience. Our results suggest that while most participants engaged in Technical Reasoning to re-purpose digital tools, some experienced "functional fixedness." This work contributes Technical Reasoning as a theoretical model for the design of digital tools.

Award
Honorable Mention
Authors
Miguel A. Renom
Université Paris-Saclay, CNRS, Inria, Orsay, France
Baptiste Caramiaux
Sorbonne Université, CNRS, ISIR, Paris, France
Michel Beaudouin-Lafon
Université Paris-Saclay, CNRS, Inria, Orsay, France
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3501877

Video
What's the Appeal? Perceptions of Review Processes for Algorithmic Decisions
Abstract

If you were significantly impacted by an algorithmic decision, how would you want the decision to be reviewed? In this study, we explore perceptions of review processes for algorithmic decisions that differ across three dimensions: the reviewer, how the review is conducted, and how long the review takes. Using a choice-based conjoint analysis we find that people prefer review processes that provide for human review, the ability to participate in the review process, and a timely outcome. Using a survey, we find that people also see human review that provides for participation to be the fairest review process. Our qualitative analysis indicates that the fairest review process provides the greatest likelihood of a favourable outcome, an opportunity for the decision subject and their situation to be fully and accurately understood, human involvement, and dignity. These findings have implications for the design of contestation procedures and also the design of algorithmic decision-making processes.

Authors
Henrietta Lyons
University of Melbourne, Melbourne, Australia
Senuri Wijenayake
The University of Sydney, Sydney, Australia
Tim Miller
University of Melbourne, Melbourne, Australia
Eduardo Velloso
University of Melbourne, Melbourne, Victoria, Australia
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517606

Video
"Look! It's a Computer Program! It's an Algorithm! It's AI!": Does Terminology Affect Human Perceptions and Evaluations of Algorithmic Decision-Making Systems?
Abstract

In the media, in policy-making, but also in research articles, algorithmic decision-making (ADM) systems are referred to as algorithms, artificial intelligence, and computer programs, amongst other terms. We hypothesize that such terminological differences can affect people's perceptions of properties of ADM systems, people's evaluations of systems in application contexts, and the replicability of research as findings may be influenced by terminological differences. In two studies (N = 397, N = 622), we show that terminology does indeed affect laypeople's perceptions of system properties (e.g., perceived complexity) and evaluations of systems (e.g., trust). Our findings highlight the need to be mindful when choosing terms to describe ADM systems, because terminology can have unintended consequences, and may impact the robustness and replicability of HCI research. Additionally, our findings indicate that terminology can be used strategically (e.g., in communication about ADM systems) to influence people's perceptions and evaluations of these systems.

Authors
Markus Langer
Universität des Saarlandes, Saarbrücken, Germany
Tim Hunsicker
Universität des Saarlandes, Saarbrücken, Germany
Tina Feldkamp
Universität des Saarlandes, Saarbrücken, Germany
Cornelius J. König
Universität des Saarlandes, Saarbrücken, Germany
Nina Grgić-Hlača
Max Planck Institute for Software Systems, Saarbrücken, Germany
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517527

Video
Whose AI Dream? In search of the aspiration in data annotation
Abstract

Data is fundamental to AI/ML models. This paper investigates the work practices concerning data annotation as performed in industry in India. Previous human-centred investigations have largely focused on annotators' subjectivity, bias, and efficiency. We present a wider perspective on data annotation: following a grounded approach, we conducted three sets of interviews with 25 annotators, 10 industry experts, and 12 ML/AI practitioners. Our results show that the work of annotators is dictated by the interests, priorities, and values of others above their station. More than a technical task, we contend that data annotation is a systematic exercise of power through organizational structure and practice. We propose a set of implications for how we can cultivate and encourage better practice to balance the tension between the need for high-quality data at low cost and the annotators' aspirations for well-being, career perspective, and active participation in building the AI dream.

Authors
Ding Wang
Google Research India, Bangalore, India
Shantanu Prabhat
Google Research, Bengaluru, Karnataka, India
Nithya Sambasivan
Google Research India, Bangalore, India
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3502121

Video