Interaction of Thoughts: Towards Mediating Task Assignment in Human-AI Cooperation with a Capability-Aware Shared Mental Model
Description

Existing work on task assignment in human-AI cooperation does not consider differences between individual team members' capabilities, leading to sub-optimal task completion. In this work, we propose a capability-aware shared mental model (CASMM) with task grouping and negotiation components, which uses tuples to break tasks down into sets of scenarios related to difficulty and then dynamically merges the task groupings proposed by the human and the AI through negotiation. We implement a prototype system and a three-phase user study as a proof of concept on an image labeling task. The results show that building a CASMM significantly boosts accuracy and time efficiency by converging, within a few iterations, on a task assignment close to the participants' real capabilities. It also helps users better understand the AI's capabilities and their own. Our method has the potential to generalize to other scenarios, such as medical diagnosis and autonomous driving, to facilitate better human-AI cooperation.

Are Two Heads Better Than One in AI-Assisted Decision Making? Comparing the Behavior and Performance of Groups and Individuals in Human-AI Collaborative Recidivism Risk Assessment
Description

With the prevalence of AI assistance in decision making, a more relevant question than the classical "are two heads better than one?" is how groups' behavior and performance in AI-assisted decision making compare with those of individuals. In this paper, we conduct a case study comparing groups and individuals in human-AI collaborative recidivism risk assessment along six aspects, including decision accuracy and confidence, appropriateness of reliance on AI, understanding of AI, decision-making fairness, and willingness to take accountability. Our results highlight that, compared to individuals, groups rely more on AI models regardless of their correctness, but are more confident when they overturn incorrect AI recommendations. We also find that groups make fairer decisions than individuals according to the accuracy equality criterion, and that groups are willing to give the AI more credit when they make correct decisions. We conclude by discussing the implications of our work.
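The accuracy equality criterion mentioned above requires that decision accuracy be (approximately) equal across demographic groups. A minimal sketch, with hypothetical decisions and labels:

```python
def accuracy(preds, labels):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

# Hypothetical decisions for two demographic groups (1 = high risk, 0 = low risk)
group_a_acc = accuracy([1, 0, 1, 1], [1, 0, 0, 1])  # 3 of 4 correct
group_b_acc = accuracy([0, 1, 1, 0], [0, 1, 0, 0])  # 3 of 4 correct

# Accuracy equality holds when the per-group accuracies are (nearly) the same
satisfies_accuracy_equality = abs(group_a_acc - group_b_acc) < 0.05
```

Note that accuracy equality is only one of several group-fairness criteria; it says nothing about how errors are distributed between false positives and false negatives within each group.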

Comparing Zealous and Restrained AI Recommendations in a Real-World Human-AI Collaboration Task
Description

When designing an AI-assisted decision-making system, there is often a tradeoff between precision and recall in the AI's recommendations. We argue that careful exploitation of this tradeoff can harness the complementary strengths of human-AI collaboration to significantly improve team performance. We investigate a real-world video anonymization task for which recall is paramount and costly to improve. We analyze the performance of 78 professional annotators working with (a) no AI assistance, (b) a high-precision "restrained" AI, and (c) a high-recall "zealous" AI, over 3,466 person-hours of annotation work. Compared to the other conditions, the zealous AI helps human teammates achieve significantly shorter task completion times and higher recall. In a follow-up study, we remove AI assistance for everyone and find negative training effects on annotators who had trained with the restrained AI. These findings and our analysis point to important implications for the design of AI assistance in recall-demanding scenarios.
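The precision/recall tradeoff at the heart of this study can be made concrete with a small sketch; the counts below are hypothetical, not from the paper:

```python
def precision_recall(tp, fp, fn):
    """Standard definitions from true positives, false positives, false negatives."""
    precision = tp / (tp + fp)  # of everything flagged, how much was correct
    recall = tp / (tp + fn)     # of everything that should be flagged, how much was found
    return precision, recall

# A "restrained" detector: flags conservatively, so few false alarms but more misses.
restrained = precision_recall(tp=80, fp=5, fn=20)   # high precision, recall 0.80

# A "zealous" detector: flags aggressively, so more false alarms but few misses.
zealous = precision_recall(tp=98, fp=40, fn=2)      # lower precision, recall 0.98
```

In a recall-paramount task like anonymization, the zealous detector's extra false positives are cheap for a human to reject, while the restrained detector's misses are expensive to find manually.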

Don’t Just Tell Me, Ask Me: AI Systems that Intelligently Frame Explanations as Questions Improve Human Logical Discernment Accuracy over Causal AI explanations
Description

Critical thinking is an essential human skill. Despite its importance, research reveals that our reasoning ability suffers from personal biases and cognitive resource limitations, leading to potentially dangerous outcomes. This paper presents the novel idea of AI-framed Questioning, which turns information relevant to the AI classification into questions that actively engage users' thinking and scaffold their reasoning process. We conducted a study with 204 participants comparing the effects of AI-framed Questioning on a critical thinking task: discerning the logical validity of socially divisive statements. Our results show that, compared to no feedback and even to causal AI explanations from an always-correct system, AI-framed Questioning significantly increases human discernment of logically flawed statements. Our experiment exemplifies a future style of human-AI co-reasoning system, in which the AI becomes a critical-thinking stimulator rather than an information teller.

Competent but Rigid: Identifying the Gap in Empowering AI to Participate Equally in Group Decision-Making
Description

Existing research on human-AI collaborative decision-making focuses mainly on the interaction between AI and individual decision-makers; there is limited understanding of how AI may perform in group decision-making. This paper presents a wizard-of-oz study in which two participants and an AI form a committee to rank three English essays. One novelty of our study is its speculative design, which endows the AI with power equal to the humans' in group decision-making: the AI can discuss and vote on equal footing with the human members. We find that although the AI's voice is considered valuable, it still plays a secondary role in the group because it cannot fully follow the dynamics of the discussion and make progressive contributions. Moreover, our participants' divergent opinions regarding an "equal AI" shed light on the possible future of human-AI relations.

Augmenting Pathologists with NaviPath: Design and Evaluation of a Human-AI Collaborative Navigation System
Description

Artificial Intelligence (AI) promises to support pathologists in navigating high-resolution tumor images to search for pathology patterns of interest. However, existing AI-assisted tools have not realized this potential due to a lack of insight into pathology practice and of HCI considerations for pathologists' navigation workflows. We first conducted a formative study with six medical professionals in pathology to capture their navigation strategies. Incorporating these observations along with the pathologists' domain knowledge, we designed NaviPath, a human-AI collaborative navigation system. An evaluation study with 15 medical professionals in pathology indicated that (i) compared to manual navigation, participants saw more than twice as many pathological patterns per unit time with NaviPath, and (ii) participants achieved higher precision and recall on average than with AI-only or manual navigation. Further qualitative analysis revealed that navigation was more consistent with NaviPath, which can improve overall examination quality.
