Human-AI, Automation, Vehicles & Drones / Trust & Explainability

[A] Paper Room 15, 2021-05-13 17:00:00~2021-05-13 19:00:00 / [B] Paper Room 15, 2021-05-14 01:00:00~2021-05-14 03:00:00 / [C] Paper Room 15, 2021-05-14 09:00:00~2021-05-14 11:00:00

Conference Name
CHI 2021
PinpointFly: An Egocentric Position-control Drone Interface using Mobile AR
Abstract

Accurate drone positioning is challenging because pilots have only a limited perception of a flying drone's position and direction from their own viewpoint. This makes conventional joystick-based speed control inaccurate and complicated, and significantly degrades piloting performance. We propose PinpointFly, an egocentric drone interface that allows pilots to arbitrarily position and rotate a drone using position-control direct interactions on see-through mobile AR, where the drone's position and direction are visualized with a virtual cast shadow (i.e., the drone's orthogonal projection onto the floor). Pilots can point to the next position or draw the drone's flight trajectory by manipulating the virtual cast shadow and the direction/height slider bar on the touchscreen. We design and implement a prototype of PinpointFly for indoor, visual-line-of-sight scenarios, comprising real-time and predefined motion-control techniques. We conduct two user studies with simple positioning and inspection tasks. Our results demonstrate that PinpointFly makes drone positioning and inspection operations faster, more accurate, and simpler, with lower workload, than a conventional joystick interface with a speed-control method.

Authors
Linfeng Chen
Tohoku University, Sendai, Japan
Kazuki Takashima
Tohoku University, Sendai, Japan
Kazuyuki Fujita
Tohoku University, Sendai, Miyagi, Japan
Yoshifumi Kitamura
Tohoku University, Sendai, Japan
DOI

10.1145/3411764.3445110

Paper URL

https://doi.org/10.1145/3411764.3445110

Video
How should AI Systems Talk to users when Collecting their Personal Information? Effects of Role Framing and Self-referencing on Human-AI Interaction
Abstract

AI systems collect our personal information in order to provide personalized services, raising privacy concerns and making users leery. As a result, systems have begun emphasizing overt over covert collection of information by directly asking users. This poses an important question for ethical interaction design, which is dedicated to improving user experience while promoting informed decision-making: Should the interface tout the benefits of information disclosure and frame itself as a help-provider? Or, should it appear as a help-seeker? We decided to find out by creating a mockup of a news recommendation system called Mindz and conducting an online user study (N=293) with the following four variations: AI system as help seeker vs. help provider vs. both vs. neither. Data showed that even though all participants received the same recommendations, power users tended to trust a help-seeking Mindz more whereas non-power users favored one that is both help-seeker and help-provider.

Authors
Mengqi Liao
The Pennsylvania State University, State College, Pennsylvania, United States
S. Shyam Sundar
The Pennsylvania State University, University Park, Pennsylvania, United States
DOI

10.1145/3411764.3445415

Paper URL

https://doi.org/10.1145/3411764.3445415

Video
Exploring and Promoting Diagnostic Transparency and Explainability in Online Symptom Checkers
Abstract

Online symptom checkers (OSC) are widely used intelligent systems in health contexts such as primary care, remote healthcare, and epidemic control. OSCs use algorithms such as machine learning to facilitate self-diagnosis and triage based on symptoms input by healthcare consumers. However, intelligent systems' lack of transparency and comprehensibility could lead to unintended consequences such as misleading users, especially in high-stakes areas such as healthcare. In this paper, we attempt to enhance diagnostic transparency by augmenting OSCs with explanations. We first conducted an interview study (N=25) to specify user needs for explanations from users of existing OSCs. Then, we designed a COVID-19 OSC that was enhanced with three types of explanations. Our lab-controlled user study (N=20) found that explanations can significantly improve user experience in multiple aspects. We discuss how explanations are interwoven into conversation flow and present implications for future OSC designs.

Authors
Chun-Hua Tsai
Pennsylvania State University, University Park, Pennsylvania, United States
Yue You
Pennsylvania State University, State College, Pennsylvania, United States
Xinning Gui
Pennsylvania State University, State College, Pennsylvania, United States
Yubo Kou
Pennsylvania State University, State College, Pennsylvania, United States
John M. Carroll
Pennsylvania State University, University Park, Pennsylvania, United States
DOI

10.1145/3411764.3445101

Paper URL

https://doi.org/10.1145/3411764.3445101

Video
User Trust in Assisted Decision-Making Using Miniaturized Near-Infrared Spectroscopy
Abstract

We investigate the use of a miniaturized Near-Infrared Spectroscopy (NIRS) device in an assisted decision-making task. We consider the real-world scenario of determining whether food contains gluten, and we investigate how end-users interact with our NIRS detection device to ultimately make this judgment. In particular, we explore the effects of different nutrition labels and representations of confidence on participants’ perception and trust. Our results show that participants tend to be conservative in their judgment and are willing to trust the device in the absence of understandable label information. We further identify strategies to increase user trust in the system. Our work contributes to the growing body of knowledge on how NIRS can be mass-appropriated for everyday sensing tasks, and how to enhance the trustworthiness of assisted decision-making systems.

Authors
Weiwei Jiang
The University of Melbourne, Melbourne, Australia
Zhanna Sarsenbayeva
University of Melbourne, Melbourne, Australia
Niels van Berkel
Aalborg University, Aalborg, Denmark
Chaofan Wang
The University of Melbourne, Melbourne, VIC, Australia
Difeng Yu
The University of Melbourne, Melbourne, VIC, Australia
Jing Wei
The University of Melbourne, Melbourne, Australia
Jorge Goncalves
The University of Melbourne, Melbourne, Australia
Vassilis Kostakos
University of Melbourne, Melbourne, Victoria, Australia
DOI

10.1145/3411764.3445710

Paper URL

https://doi.org/10.1145/3411764.3445710

Video
Human-AI Interaction in Human Resource Management: Understanding Why Employees Resist Algorithmic Evaluation at Workplaces and How to Mitigate Burdens
Abstract

Recently, Artificial Intelligence (AI) has been used to enable efficient decision-making in managerial and organizational contexts, ranging from employment to dismissal. However, to avoid employees’ antipathy toward AI, it is important to understand what aspects of AI employees like and/or dislike. In this paper, we aim to identify how employees perceive current human resource (HR) teams and future algorithmic management. Specifically, we explored what factors negatively influence employees’ perceptions of AI making work performance evaluations. Through in-depth interviews with 21 workers, we found that 1) employees feel six types of burdens (i.e., emotional, mental, bias, manipulation, privacy, and social) toward AI’s introduction to human resource management (HRM), and that 2) these burdens could be mitigated by incorporating transparency, interpretability, and human intervention to algorithmic decision-making. Based on our findings, we present design efforts to alleviate employees’ burdens. To leverage AI for HRM in fair and trustworthy ways, we call for the HCI community to design human-AI collaboration systems with various HR stakeholders.

Authors
Hyanghee Park
Seoul National University, Seoul, Korea, Republic of
Daehwan Ahn
University of Pennsylvania, Philadelphia, Pennsylvania, United States
Kartik Hosanagar
Wharton School, Philadelphia, Pennsylvania, United States
Joonhwan Lee
Seoul National University, Seoul, Korea, Republic of
DOI

10.1145/3411764.3445304

Paper URL

https://doi.org/10.1145/3411764.3445304

Video
Effects of Semantic Segmentation Visualization on Trust, Situation Awareness, and Cognitive Load in Highly Automated Vehicles
Abstract

Autonomous vehicles could improve mobility, safety, and inclusion in traffic. While this technology seems within reach, its successful introduction depends on the intended user's acceptance. A substantial factor for this acceptance is trust in the autonomous vehicle's capabilities. Visualizing internal information processed by an autonomous vehicle could calibrate this trust by enabling the perception of the vehicle's detection capabilities (and its failures) while only inducing a low cognitive load. Additionally, the simultaneously raised situation awareness could benefit potential take-overs. We report the results of two comparative online studies on visualizing semantic segmentation information for the human user of autonomous vehicles. Effects on trust, cognitive load, and situation awareness were measured using a simulation (N = 32) and state-of-the-art panoptic segmentation on a pre-recorded real-world video (N = 41). Results show that the visualization using Augmented Reality increases situation awareness while keeping cognitive load low.

Authors
Mark Colley
Ulm University, Ulm, Germany
Benjamin Eder
Ulm University, Ulm, Germany
Jan Ole Rixen
Institute of Media Informatics, Ulm, Germany
Enrico Rukzio
University of Ulm, Ulm, Germany
DOI

10.1145/3411764.3445351

Paper URL

https://doi.org/10.1145/3411764.3445351

Video
Let’s Share a Ride into the Future: A Qualitative Study Comparing Potential Implementation Scenarios of Automated Vehicles.
Abstract

Automated Vehicles (AVs) are expected to radically disrupt our mobility. Whereas much is speculated about how AVs will actually be implemented in the future, we argue that their advent should be taken as an opportunity to enhance all people’s mobility and improve their lives. Thus, it is important to focus on both the environment and the needs of target groups that have not been sufficiently considered in the past. In this paper, we present the findings from a qualitative study (N=11) of public attitudes toward hypothetical implementation scenarios for AVs. Our results indicate that people are aware of the benefits of shared mobility for the environment and society, and are generally open to using it. However, 1) emotional factors temper this openness and 2) security concerns were expressed by female participants. We recommend that the identified concerns be addressed to allow AVs to fully exploit their benefits for society and the environment.

Authors
Martina Schuß
Technische Hochschule Ingolstadt, Ingolstadt, Germany
Philipp Wintersberger
Technische Hochschule Ingolstadt, Ingolstadt, Germany
Andreas Riener
Technische Hochschule Ingolstadt, Ingolstadt, Bavaria, Germany
DOI

10.1145/3411764.3445609

Paper URL

https://doi.org/10.1145/3411764.3445609

Video
Calibrating Pedestrians' Trust in Automated Vehicles: Does an Intent Display in an External HMI Support Trust Calibration and Safe Crossing Behavior?
Abstract

Policymakers recommend that automated vehicles (AVs) display their automated driving status using an external human-machine interface (eHMI). However, previous studies suggest that a status eHMI is associated with overtrust, which might be overcome by an additional yielding intent message. We conducted a video-based laboratory study (N=67) to investigate pedestrians’ trust and crossing behavior in repeated encounters with AVs. In a 2x2 between-subjects design, we investigated (1) the occurrence of a malfunction (AV failing to yield) and (2) system transparency (status eHMI vs. status+intent eHMI). Results show that during initial encounters, trust gradually increases and crossing onset time decreases. After a malfunction, trust declines but recovers quickly. In the status eHMI group, trust was reduced more, and participants showed 7.3 times higher odds of colliding with the AV as compared to the status+intent group. We conclude that a status eHMI can cause pedestrians to overtrust AVs and advocate additional intent messages.

Award
Honorable Mention
Authors
Stefanie Martina Faas
Mercedes-Benz AG, Boeblingen, Germany
Johannes Kraus
Ulm University, Ulm, Germany
Alexander Schoenhals
Mercedes-Benz AG, Boeblingen, Germany
Martin Baumann
Ulm University, Ulm, Germany
DOI

10.1145/3411764.3445738

Paper URL

https://doi.org/10.1145/3411764.3445738

Video
A Taxonomy of Vulnerable Road Users for HCI Based On A Systematic Literature Review
Abstract

Recent automotive research often focuses on automated driving, including the interaction between automated vehicles (AVs) and so-called "vulnerable road users" (VRUs). While road safety statistics and traffic psychology at least define VRUs as pedestrians, cyclists, and motorcyclists, many publications on human-vehicle interaction use the term without even defining it. The actual target group remains unclear. Since each group already poses a broad spectrum of research challenges, a one-fits-all solution seems unrealistic and inappropriate, and a much clearer differentiation is required. To foster clarity and comprehensibility, we propose a literature-based taxonomy providing a structured separation of (vulnerable) road users, designed to particularly (but not exclusively) support research on the communication between VRUs and AVs. It consists of two conceptual hierarchies and will help practitioners and researchers by providing a uniform and comparable set of terms needed for the design, implementation, and description of HCI applications.

Authors
Kai Holländer
LMU Munich, Munich, Germany
Mark Colley
Ulm University, Ulm, Germany
Enrico Rukzio
University of Ulm, Ulm, Germany
Andreas Butz
LMU Munich, Munich, Germany
DOI

10.1145/3411764.3445480

Paper URL

https://doi.org/10.1145/3411764.3445480

Video
What Matters in Professional Drone Pilots’ Practice? An Interview Study to Understand the Complexity of Their Work and Inform Human-Drone Interaction Research
Abstract

Human-drone interaction is a growing topic of interest within HCI research. Researchers propose many innovative concepts for drone applications, but much of this research does not incorporate knowledge on existing applications already adopted by professionals. This limits the validity of said research. To address this limitation, we present our findings from an in-depth interview study with 10 professional drone pilots. Our participants were armed with significant experience and qualifications -- pertinent to both drone operations and a set of applications covering diverse industries. Our findings have resulted in design recommendations that should inform both ends and means of human-drone interaction research. These include, but are not limited to: safety-related protocols, insights from domain-specific use cases, and relevant practices outside of hands-on flight.

Authors
Sara Ljungblad
University of Gothenburg and Chalmers University of Technology, Gothenburg, Sweden
Yemao Man
University of Gothenburg, Chalmers University of Technology, Gothenburg, Sweden
Mehmet Aydın Baytaş
Chalmers University of Technology, Gothenburg, Sweden
Mafalda Gamboa
Chalmers University of Technology, Gothenburg, Sweden
Mohammad Obaid
Chalmers University of Technology, Gothenburg, Sweden
Morten Fjeld
Chalmers University of Technology, Gothenburg, Sweden
DOI

10.1145/3411764.3445737

Paper URL

https://doi.org/10.1145/3411764.3445737

Video
ExplAIn Yourself! Transparency for Positive UX in Autonomous Driving
Abstract

In a fully autonomous driving situation, passengers hand over the steering control to a highly automated system. Autonomous driving behaviour may lead to confusion and negative user experience. When establishing such new technology, the user’s acceptance and understanding are crucial factors regarding success and failure. Using a driving simulator and a mobile application, we evaluated if system transparency during and after the interaction can increase the user experience and subjective feeling of safety and control. We contribute an initial guideline for autonomous driving experience design, bringing together the areas of user experience, explainable artificial intelligence and autonomous driving. The AVAM questionnaire, UEQ-S and interviews show that explanations during or after the ride help turn a negative user experience into a neutral one, which might be due to the increased feeling of control. However, we did not detect an effect for combining explanations during and after the ride.

Authors
Tobias Schneider
Stuttgart Media University, Stuttgart, Baden-Württemberg, Germany
Joana Hois
Mercedes-Benz AG, Böblingen, Baden-Württemberg, Germany
Alischa Rosenstein
Mercedes-Benz AG, Böblingen, Baden-Württemberg, Germany
Sabiha Ghellal
Stuttgart Media University, Stuttgart, Baden-Württemberg, Germany
Dimitra Theofanou-Fülbier
Mercedes-Benz AG, Böblingen, Baden-Württemberg, Germany
Ansgar R.S. Gerlicher
Stuttgart Media University, Stuttgart, Baden-Württemberg, Germany
DOI

10.1145/3411764.3446647

Paper URL

https://doi.org/10.1145/3411764.3446647

Video
Disagree? You Must Be a Bot! How Beliefs Shape Twitter Profile Perceptions
Abstract

In this paper, we investigate the human ability to distinguish political social bots from humans on Twitter. Following motivated reasoning theory from social and cognitive psychology, our central hypothesis is that opinion-incongruent accounts in particular are perceived as social bot accounts when the account is ambiguous about its nature. We also hypothesize that credibility ratings mediate this relationship. We asked N = 151 participants to evaluate 24 Twitter accounts and decide whether the accounts were humans or social bots. Findings support our motivated reasoning hypothesis for a sub-group of Twitter users (those who are more familiar with Twitter): Accounts that are opinion-incongruent are evaluated as relatively more bot-like than accounts that are opinion-congruent. Moreover, it does not matter whether the account is clearly a social bot, clearly human, or ambiguous about its nature. This effect was mediated by perceived credibility: congruent profiles were evaluated as more credible and were therefore less likely to be perceived as bots.

Authors
Magdalena Wischnewski
University of Duisburg-Essen, Duisburg, Germany
Rebecca Bernemann
University of Duisburg-Essen, Duisburg, Germany
Thao Ngo
University of Duisburg-Essen, Duisburg, Germany
Nicole Krämer
University of Duisburg-Essen, Duisburg, Germany
DOI

10.1145/3411764.3445109

Paper URL

https://doi.org/10.1145/3411764.3445109

Video