132. Working with Intelligent Systems and Tools

Owning Mistakes Sincerely: Strategies for Mitigating AI Errors
Description

Interactive AI systems such as voice assistants are bound to make errors because of imperfect sensing and reasoning. Prior human-AI interaction research has illustrated the importance of various strategies for error mitigation in repairing the perception of an AI following a breakdown in service. These strategies include explanations, monetary rewards, and apologies. This paper extends prior work on error mitigation by exploring how different methods of apology conveyance may affect people's perceptions of AI agents; we report an online study (N=37) that examines how varying the sincerity of an apology and the assignment of blame (on either the agent itself or others) affects participants' perceptions and experience with erroneous AI agents. We found that agents that openly accepted the blame and apologized sincerely for mistakes were thought to be more intelligent, likeable, and effective in recovering from errors than agents that shifted the blame to others.

Exploring Technical Reasoning in Digital Tool Use
Description

The Technical Reasoning hypothesis in cognitive neuroscience posits that humans engage in physical tool use by reasoning about mechanical interactions among objects. By modeling the use of objects as tools based on their abstract properties, this theory explains how tools can be re-purposed beyond their assigned function. This paper assesses the relevance of Technical Reasoning to digital tool use. We conducted an experiment in which 16 participants were forced to re-purpose commands to complete a text layout task. We analyzed self-reported scores of creative personality and experience with text editing, and found a significant association between re-purposing performance and creativity, but no association with experience. Our results suggest that while most participants engaged in Technical Reasoning to re-purpose digital tools, some experienced "functional fixedness." This work contributes Technical Reasoning as a theoretical model for the design of digital tools.

What's the Appeal? Perceptions of Review Processes for Algorithmic Decisions
Description

If you were significantly impacted by an algorithmic decision, how would you want the decision to be reviewed? In this study, we explore perceptions of review processes for algorithmic decisions that differ across three dimensions: the reviewer, how the review is conducted, and how long the review takes. Using a choice-based conjoint analysis, we find that people prefer review processes that provide for human review, the ability to participate in the review process, and a timely outcome. Using a survey, we find that people also consider human review that provides for participation to be the fairest review process. Our qualitative analysis indicates that the fairest review process provides the greatest likelihood of a favourable outcome, an opportunity for the decision subject and their situation to be fully and accurately understood, human involvement, and dignity. These findings have implications for the design of contestation procedures as well as for the design of algorithmic decision-making processes.

"Look! It's a Computer Program! It's an Algorithm! It's AI!'': Does Terminology Affect Human Perceptions and Evaluations of Algorithmic Decision-Making Systems
Description

In the media, in policy-making, but also in research articles, algorithmic decision-making (ADM) systems are referred to as algorithms, artificial intelligence, and computer programs, amongst other terms. We hypothesize that such terminological differences can affect people's perceptions of properties of ADM systems, people's evaluations of systems in application contexts, and the replicability of research as findings may be influenced by terminological differences. In two studies (N = 397, N = 622), we show that terminology does indeed affect laypeople's perceptions of system properties (e.g., perceived complexity) and evaluations of systems (e.g., trust). Our findings highlight the need to be mindful when choosing terms to describe ADM systems, because terminology can have unintended consequences, and may impact the robustness and replicability of HCI research. Additionally, our findings indicate that terminology can be used strategically (e.g., in communication about ADM systems) to influence people's perceptions and evaluations of these systems.

Whose AI Dream? In search of the aspiration in data annotation
Description

Data is fundamental to AI/ML models. This paper investigates the work practices of data annotation as performed in industry in India. Previous human-centred investigations have largely focused on annotators' subjectivity, bias and efficiency. We present a wider perspective on data annotation: following a grounded approach, we conducted three sets of interviews with 25 annotators, 10 industry experts and 12 ML/AI practitioners. Our results show that the work of annotators is dictated by the interests, priorities and values of others above their station. We contend that data annotation is more than a technical process: it is a systematic exercise of power through organizational structure and practice. We propose a set of implications for how we can cultivate and encourage better practice to balance the tension between the need for high-quality data at low cost and the annotators' aspiration for well-being, career prospects, and active participation in building the AI dream.
