Trust and Control in AI Systems

Conference Name
CHI 2022
Understanding the Impact of Explanations on Advice-Taking: a User Study for AI-based Clinical Decision Support Systems
Abstract

The field of eXplainable Artificial Intelligence (XAI) focuses on providing explanations for AI systems' decisions. XAI applications to AI-based Clinical Decision Support Systems (DSS) should increase trust in the DSS by allowing clinicians to investigate the reasons behind its suggestions. In this paper, we present the results of a user study on the impact of advice from a clinical DSS on healthcare providers' judgment in two cases: when the clinical DSS explains its suggestion and when it does not. We examined the weight of advice, the behavioral intention to use the system, and user perceptions, using both quantitative and qualitative measures. Our results indicate that advice has a stronger impact when the DSS provides an explanation for its decision. Additionally, drawing on the open-ended questions, we offer insights on how to improve the explanations of diagnosis forecasts for healthcare assistants, nurses, and doctors.
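The weight of advice (WOA) mentioned above is a standard measure in the judge-advisor literature: the fraction of the distance between a judge's initial estimate and the received advice that the final estimate covers. A minimal sketch of that standard formula follows (the paper's exact operationalization may differ, and the example values are hypothetical):

```python
def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """Judge-advisor WOA: 0 means the advice was ignored,
    1 means it was fully adopted."""
    if advice == initial:
        raise ValueError("WOA is undefined when the advice equals the initial estimate")
    return (final - initial) / (advice - initial)

# Hypothetical example: a clinician's initial risk estimate of 30 (on a
# 0-100 scale) shifts to 45 after the DSS suggests 50.
print(weight_of_advice(initial=30, advice=50, final=45))  # 0.75
```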

Award
Honorable Mention
Authors
Cecilia Panigutti
Scuola Normale Superiore, Pisa, Italy
Andrea Beretta
CNR - Italian National Research Council, Pisa, Italy
Fosca Giannotti
Scuola Normale Superiore, Pisa, Italy
Dino Pedreschi
University of Pisa, Pisa, Italy
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3502104

GANSlider: How Users Control Generative Models for Images using Multiple Sliders with and without Feedforward Information
Abstract

We investigate how multiple sliders, with and without feedforward visualizations, influence users' control of generative models. In an online study (N=138), we collected a dataset of people interacting with a generative adversarial network (StyleGAN2) in an image reconstruction task. We found that more control dimensions (sliders) significantly increase task difficulty and the number of user actions. Visual feedforward partly mitigates this by enabling more goal-directed interaction. However, we found no evidence of faster or more accurate task performance, which points to a tradeoff between feedforward detail and the cognitive costs it implies, such as attention. Moreover, we found that visualizations alone are not always sufficient for users to understand individual control dimensions. Our study quantifies fundamental UI design factors and the resulting interaction behavior in this context, revealing opportunities for improving the UI design of interactive applications of generative models. We close by discussing design directions and open challenges.
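Mechanically, slider interfaces for GANs like the one studied here typically map each slider to a direction in the generator's latent space, so that moving a slider translates the latent code before the image is re-rendered. The sketch below illustrates this common pattern with NumPy; it is not the authors' implementation, and base_w, directions, and the generator call are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
LATENT_DIM = 512  # dimensionality of StyleGAN2's intermediate latent space

# Hypothetical stand-ins: in practice the latent code comes from the trained
# model and the directions from a direction-discovery method (e.g., PCA over
# latent samples); here they are random for illustration.
base_w = rng.standard_normal(LATENT_DIM)            # latent code of the current image
directions = rng.standard_normal((5, LATENT_DIM))   # one direction per slider
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

def apply_sliders(w: np.ndarray, slider_values: np.ndarray) -> np.ndarray:
    """Translate the latent code along each slider's direction,
    scaled by that slider's current value."""
    return w + slider_values @ directions

edited_w = apply_sliders(base_w, np.array([0.8, -1.2, 0.0, 0.3, 0.0]))
# image = generator.synthesis(edited_w)  # hypothetical generator call
```

In this framing, a feedforward visualization is a precomputed preview of the images reached at several positions along a slider's direction, shown before the user commits to a movement.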

Authors
Hai Dang
University of Bayreuth, Bayreuth, Germany
Lukas Mecke
Bundeswehr University Munich, Munich, Germany
Daniel Buschek
University of Bayreuth, Bayreuth, Germany
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3502141

UX Research on Conversational Human-AI Interaction: A Literature Review of the ACM Digital Library
Abstract

Early conversational agents (CAs) focused on dyadic human-AI interaction between a single human and a CA; polyadic human-AI interaction, in which CAs are designed to mediate human-human interactions, has since grown in popularity. CAs for polyadic interactions are unique because they encompass hybrid social interactions, i.e., human-to-CA, human-to-human, and human-to-group behaviors. However, research on polyadic CAs is scattered across different fields, making it challenging to identify, compare, and accumulate existing knowledge. To inform the future design of CA systems, we conducted a literature review of ACM publications and identified a set of works that conducted UX (user experience) research. We qualitatively synthesized the effects of polyadic CAs on four aspects of human-human interaction, i.e., communication, engagement, connection, and relationship maintenance. Through a mixed-method analysis of the selected polyadic and dyadic CA studies, we developed a suite of measurements for evaluating these effects. Our findings show that designing with social boundaries, such as privacy, disclosure, and identification, is crucial for ethical polyadic CAs. Future research should also advance usability testing methods and trust-building guidelines for conversational AI.

Authors
Qingxiao Zheng
University of Illinois at Urbana-Champaign, Champaign, Illinois, United States
Yiliu Tang
University of Illinois at Urbana-Champaign, Champaign, Illinois, United States
Yiren Liu
University of Illinois at Urbana-Champaign, Champaign, Illinois, United States
Weizi Liu
University of Illinois at Urbana-Champaign, Champaign, Illinois, United States
Yun Huang
University of Illinois at Urbana-Champaign, Champaign, Illinois, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3501855

The Trusted Listener: The Influence of Anthropomorphic Eye Design of Social Robots on User's Perception of Trustworthiness
Abstract

Social robots have become important companions for humans. Their anthropomorphic features, which are important for building a natural user experience and a trustworthy human-robot partnership, have attracted increasing attention. Among these features, the eyes draw most of a viewer's attention and are therefore particularly important. This study investigates the influence of robot eye design on users' perception of trustworthiness. Specifically, we developed a simulated robot model and conducted three sets of experiments with sixty-six participants to investigate how (i) the visual complexity of the eye design, (ii) the blink rate, and (iii) gaze aversion affect users' perceived trustworthiness of social robots. Results indicate that high visual complexity and gaze aversion lead to higher perceived trustworthiness and reveal a positive correlation between the perceived anthropomorphism of the eye design and users' trust, while no significant effect of blink rate was found. We conclude with preliminary suggestions for the design of future social robots.

Authors
Xinyu Zhu
Shanghai Jiao Tong University, Shanghai, China
Xingguo Zhang
Shanghai Jiao Tong University, Shanghai, China
Zinan Chen
Shanghai Jiao Tong University, Shanghai, China
Zhanxun DONG
Shanghai Jiao Tong University, Shanghai, China
Zhenyu Gu
Shanghai Jiao Tong University, Shanghai, China
Danni Chang
Shanghai Jiao Tong University, Shanghai, China
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517670

Model Positionality and Computational Reflexivity: Promoting Reflexivity in Data Science
Abstract

Data science and machine learning provide indispensable techniques for understanding phenomena at scale, but the discretionary choices made when doing this work are often not recognized. Drawing from qualitative research practices, we describe how the concepts of positionality and reflexivity can be adapted to provide a framework for understanding, discussing, and disclosing the discretionary choices and subjectivity inherent in data science work. We first introduce the concepts of model positionality and computational reflexivity, which can help data scientists reflect on and communicate the social and cultural context of a model's development and use, the data annotators and their annotations, and the data scientists themselves. We then describe the unique challenges of adapting these concepts for data science work and offer annotator fingerprinting and position mining as promising solutions. Finally, we demonstrate these techniques in a case study of the development of classifiers for toxic commenting in online communities.
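The abstract names annotator fingerprinting as a technique but does not define it here; one plausible minimal reading (an illustrative sketch over assumed toy data, not the authors' implementation) is to summarize each annotator's labeling behavior, e.g., their label rate and their agreement with the per-item majority, so that distinctive annotation positions become visible:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical toy data: (annotator, comment id, toxicity label in {0, 1}).
annotations = [
    ("a1", "c1", 1), ("a1", "c2", 0), ("a1", "c3", 1),
    ("a2", "c1", 1), ("a2", "c2", 1), ("a2", "c3", 1),
    ("a3", "c1", 0), ("a3", "c2", 0), ("a3", "c3", 1),
]

# Majority label per item, used as a simple reference point.
by_item = defaultdict(list)
for annotator, item, label in annotations:
    by_item[item].append(label)
majority = {item: round(mean(labels)) for item, labels in by_item.items()}

# Fingerprint per annotator: how often they label toxic, and how often
# they agree with the majority.
by_annotator = defaultdict(list)
for annotator, item, label in annotations:
    by_annotator[annotator].append((item, label))

for annotator, pairs in sorted(by_annotator.items()):
    toxic_rate = mean(label for _, label in pairs)
    agreement = mean(int(label == majority[item]) for item, label in pairs)
    print(f"{annotator}: toxic-rate={toxic_rate:.2f}, majority-agreement={agreement:.2f}")
```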

Award
Honorable Mention
Authors
Scott Allen Cambo
Northwestern University, Evanston, Illinois, United States
Darren Gergle
Northwestern University, Evanston, Illinois, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3501998
