51. Human-AI, Automation, Vehicles & Drones / Trust & Explainability

Concluded Study Session

This study session has concluded. Thank you for participating.

PinpointFly: An Egocentric Position-control Drone Interface using Mobile AR
Description

Accurate drone positioning is challenging because pilots have only a limited perception of a flying drone's position and orientation from their own viewpoint. This makes conventional joystick-based speed control inaccurate and complicated, and significantly degrades piloting performance. We propose PinpointFly, an egocentric drone interface that allows pilots to arbitrarily position and rotate a drone through position-control direct interactions on a see-through mobile AR view, where the drone's position and direction are visualized with a virtual cast shadow (i.e., the drone's orthogonal projection onto the floor). Pilots can point to the next position or draw the drone's flight trajectory by manipulating the virtual cast shadow and the direction/height slider bars on the touchscreen. We design and implement a prototype of PinpointFly for indoor, visual-line-of-sight scenarios, comprising real-time and predefined motion-control techniques. We conduct two user studies with simple positioning and inspection tasks. Our results demonstrate that PinpointFly makes drone positioning and inspection operations faster, more accurate, and simpler, with lower workload, than a conventional joystick interface with speed control.
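The interface's key construct is geometric: the virtual cast shadow is simply the drone's orthogonal projection onto the floor plane, and dragging the shadow sets the horizontal target while the sliders set height and yaw. A minimal sketch of that mapping, assuming a floor plane at z = 0 (the paper publishes no code; all function names here are illustrative):

```python
import numpy as np

def cast_shadow(drone_pos: np.ndarray) -> np.ndarray:
    """Orthogonal projection of the drone onto the floor plane z = 0."""
    shadow = drone_pos.astype(float).copy()
    shadow[2] = 0.0
    return shadow

def shadow_to_setpoint(shadow_xy: np.ndarray, height_slider: float,
                       yaw_slider_deg: float) -> tuple[np.ndarray, float]:
    """Map a dragged shadow position plus slider values to a flight setpoint."""
    target = np.array([shadow_xy[0], shadow_xy[1], height_slider])
    return target, np.deg2rad(yaw_slider_deg)

if __name__ == "__main__":
    drone = np.array([1.2, -0.5, 1.8])            # current drone position (m)
    print("shadow:", cast_shadow(drone))          # -> [ 1.2 -0.5  0. ]
    # Pilot drags the shadow to (2.0, 0.3) and sets the height/yaw sliders.
    print(shadow_to_setpoint(np.array([2.0, 0.3]), 1.5, 90.0))
```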

How should AI Systems Talk to users when Collecting their Personal Information? Effects of Role Framing and Self-referencing on Human-AI Interaction
Description

AI systems collect our personal information in order to provide personalized services, raising privacy concerns and making users leery. As a result, systems have begun emphasizing overt over covert collection of information by directly asking users. This poses an important question for ethical interaction design, which is dedicated to improving user experience while promoting informed decision-making: should the interface tout the benefits of information disclosure and frame itself as a help-provider, or should it appear as a help-seeker? We set out to answer this by creating a mockup of a news recommendation system called Mindz and conducting an online user study (N=293) with four variations: AI system as help-seeker vs. help-provider vs. both vs. neither. Even though all participants received the same recommendations, power users tended to trust the help-seeking Mindz more, whereas non-power users favored the version that was both help-seeker and help-provider.

Exploring and Promoting Diagnostic Transparency and Explainability in Online Symptom Checkers
Description

Online symptom checkers (OSCs) are widely used intelligent systems in health contexts such as primary care, remote healthcare, and epidemic control. OSCs use algorithms such as machine learning to facilitate self-diagnosis and triage based on symptoms input by healthcare consumers. However, intelligent systems' lack of transparency and comprehensibility could lead to unintended consequences such as misleading users, especially in high-stakes areas such as healthcare. In this paper, we attempt to enhance diagnostic transparency by augmenting OSCs with explanations. We first conducted an interview study (N=25) with users of existing OSCs to identify their needs for explanations. We then designed a COVID-19 OSC enhanced with three types of explanations. Our lab-controlled user study (N=20) found that explanations can significantly improve user experience in multiple aspects. We discuss how explanations are interwoven into conversation flow and present implications for future OSC designs.
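Mechanically, augmenting an OSC with explanations means attaching human-readable rationale to each triage result rather than returning a bare label. The abstract does not name the paper's three explanation types, so the categories below (rationale, confidence, limits) are illustrative assumptions in a minimal sketch:

```python
from dataclasses import dataclass, field

@dataclass
class TriageResult:
    condition: str
    urgency: str                        # e.g. "self-care", "see GP", "emergency"
    explanations: dict[str, str] = field(default_factory=dict)

def explain(result: TriageResult, matched_symptoms: list[str],
            confidence: float) -> TriageResult:
    # Attach human-readable explanations alongside the triage advice.
    result.explanations["rationale"] = (
        f"Suggested because you reported: {', '.join(matched_symptoms)}.")
    result.explanations["confidence"] = (
        f"The system is {confidence:.0%} confident in this suggestion.")
    result.explanations["limits"] = (
        "This is not a medical diagnosis; consult a clinician if unsure.")
    return result

result = explain(TriageResult("influenza", "see GP"),
                 ["fever", "cough", "fatigue"], 0.72)
print(result.explanations["rationale"])
```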

User Trust in Assisted Decision-Making Using Miniaturized Near-Infrared Spectroscopy
Description

We investigate the use of a miniaturized Near-Infrared Spectroscopy (NIRS) device in an assisted decision-making task. We consider the real-world scenario of determining whether food contains gluten, and we examine how end-users interact with our NIRS detection device to ultimately make this judgment. In particular, we explore the effects of different nutrition labels and representations of confidence on participants’ perception and trust. Our results show that participants tend to be conservative in their judgment and are willing to trust the device in the absence of understandable label information. We further identify strategies to increase user trust in the system. Our work contributes to the growing body of knowledge on how NIRS can be mass-appropriated for everyday sensing tasks, and how to enhance the trustworthiness of assisted decision-making systems.
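One of the study's manipulations is how the device's confidence is represented to the user. As a purely hypothetical sketch of one such representation (the thresholds, wording, and function name are assumptions, not the paper's design):

```python
def gluten_verdict(prob_gluten: float) -> str:
    """Map a detector's probability estimate to a conservative user-facing
    verdict. Thresholds are illustrative, not taken from the paper."""
    if prob_gluten >= 0.9:
        return "Gluten detected"
    if prob_gluten <= 0.1:
        return "No gluten detected"
    return "Uncertain: check the ingredient label"

for p in (0.97, 0.50, 0.03):
    print(f"p={p:.2f}: {gluten_verdict(p)}")
```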

Human-AI Interaction in Human Resource Management: Understanding Why Employees Resist Algorithmic Evaluation at Workplaces and How to Mitigate Burdens
Description

Recently, Artificial Intelligence (AI) has been used to enable efficient decision-making in managerial and organizational contexts, ranging from hiring to dismissal. However, to avoid employees’ antipathy toward AI, it is important to understand which aspects of AI employees like and dislike. In this paper, we aim to identify how employees perceive current human resource (HR) teams and future algorithmic management. Specifically, we explored what factors negatively influence employees’ perceptions of AI making work performance evaluations. Through in-depth interviews with 21 workers, we found that 1) employees feel six types of burdens (i.e., emotional, mental, bias, manipulation, privacy, and social) toward AI’s introduction to human resource management (HRM), and that 2) these burdens could be mitigated by incorporating transparency, interpretability, and human intervention into algorithmic decision-making. Based on our findings, we present design efforts to alleviate employees’ burdens. To leverage AI for HRM in fair and trustworthy ways, we call for the HCI community to design human-AI collaboration systems with various HR stakeholders.
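The mitigations the interviewees point to (transparency, interpretability, human intervention) map naturally onto a review pipeline in which an algorithmic score carries its rationale and low-confidence cases are routed to a human reviewer. A minimal sketch under those assumptions (none of these names or thresholds come from the paper):

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    employee_id: str
    score: float          # model's performance score in [0, 1]
    confidence: float     # model's confidence in that score
    rationale: str        # interpretability: why the score was given

def route(evaluation: Evaluation, threshold: float = 0.8) -> str:
    """Human intervention: low-confidence evaluations go to a human reviewer."""
    if evaluation.confidence < threshold:
        return "human_review"
    return "auto_record"

e = Evaluation("emp-042", score=0.61, confidence=0.55,
               rationale="Missed 3 of 12 quarterly targets; peer feedback mixed.")
print(route(e))  # -> human_review
```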

Effects of Semantic Segmentation Visualization on Trust, Situation Awareness, and Cognitive Load in Highly Automated Vehicles
Description

Autonomous vehicles could improve mobility, safety, and inclusion in traffic. While this technology seems within reach, its successful introduction depends on the intended users' acceptance, and a substantial factor in this acceptance is trust in the autonomous vehicle's capabilities. Visualizing the internal information processed by an autonomous vehicle could calibrate this trust by making the vehicle's detection capabilities (and failures) perceptible while inducing only a low cognitive load. The simultaneously raised situation awareness could additionally benefit potential take-overs.

We report the results of two comparative online studies on visualizing semantic segmentation information for the human user of autonomous vehicles. Effects on trust, cognitive load, and situation awareness were measured using a simulation (N = 32) and state-of-the-art panoptic segmentation on a pre-recorded real-world video (N = 41). Results show that the Augmented Reality visualization increases situation awareness while keeping cognitive load low.
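At its core, this kind of visualization alpha-blends a per-pixel class mask over the camera image (in AR, anchored to the scene). A minimal sketch of the blending step; the class palette, alpha value, and class IDs are illustrative assumptions, not the studies' actual design:

```python
import numpy as np

# Hypothetical class -> RGB colour map (the studies' palette is not published).
PALETTE = {0: (0, 0, 0),        # background
           1: (128, 64, 128),   # road
           2: (220, 20, 60),    # pedestrian
           3: (0, 0, 142)}      # vehicle

def overlay_segmentation(frame: np.ndarray, mask: np.ndarray,
                         alpha: float = 0.4) -> np.ndarray:
    """Alpha-blend a per-pixel class mask onto a camera frame (H x W x 3)."""
    colour = np.zeros_like(frame)
    for cls, rgb in PALETTE.items():
        colour[mask == cls] = rgb
    return ((1 - alpha) * frame + alpha * colour).astype(np.uint8)

frame = np.full((4, 4, 3), 200, dtype=np.uint8)   # dummy grey frame
mask = np.array([[0, 1, 1, 0],
                 [1, 1, 2, 3],
                 [1, 1, 2, 3],
                 [0, 1, 1, 0]])
print(overlay_segmentation(frame, mask)[1, 2])    # blended pedestrian pixel
```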

Let’s Share a Ride into the Future: A Qualitative Study Comparing Potential Implementation Scenarios of Automated Vehicles.
Description

Automated Vehicles (AVs) are expected to radically disrupt our mobility. While much is speculated about how AVs will actually be implemented in the future, we argue that their advent should be taken as an opportunity to enhance all people's mobility and improve their lives. It is therefore important to focus both on the environment and on the needs of target groups that have not been sufficiently considered in the past.

In this paper, we present findings from a qualitative study (N=11) of public attitudes toward hypothetical implementation scenarios for AVs. Our results indicate that people are aware of the benefits of shared mobility for the environment and society and are generally open to using it. However, 1) emotional factors temper this openness, and 2) female participants expressed security concerns. We recommend that the identified concerns be addressed so that AVs can fully deliver their benefits for society and the environment.

Calibrating Pedestrians' Trust in Automated Vehicles: Does an Intent Display in an External HMI Support Trust Calibration and Safe Crossing Behavior?
Description

Policymakers recommend that automated vehicles (AVs) display their automated driving status using an external human-machine interface (eHMI). However, previous studies suggest that a status eHMI is associated with overtrust, which might be overcome by an additional yielding intent message. We conducted a video-based laboratory study (N=67) to investigate pedestrians’ trust and crossing behavior in repeated encounters with AVs. In a 2x2 between-subjects design, we investigated (1) the occurrence of a malfunction (AV failing to yield) and (2) system transparency (status eHMI vs. status+intent eHMI). Results show that during initial encounters, trust gradually increases and crossing onset time decreases. After a malfunction, trust declines but recovers quickly. In the status eHMI group, trust was reduced more, and participants showed 7.3 times higher odds of colliding with the AV as compared to the status+intent group. We conclude that a status eHMI can cause pedestrians to overtrust AVs and advocate additional intent messages.
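The reported "7.3 times higher odds" is an odds ratio for collisions between the two eHMI groups. As a reminder of how such a ratio is computed, here is a worked sketch with hypothetical counts (the paper's raw collision counts are not given in the abstract, so the numbers below do not reproduce its result):

```python
def odds_ratio(events_a: int, n_a: int, events_b: int, n_b: int) -> float:
    """Odds ratio of an event occurring in group A versus group B."""
    odds_a = events_a / (n_a - events_a)
    odds_b = events_b / (n_b - events_b)
    return odds_a / odds_b

# Hypothetical: 12 of 34 status-only participants collide vs. 2 of 33
# in the status+intent group.
print(f"odds ratio: {odds_ratio(12, 34, 2, 33):.1f}")  # -> 8.5 (made-up data)
```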

A Taxonomy of Vulnerable Road Users for HCI Based On A Systematic Literature Review
Description

Recent automotive research often focuses on automated driving, including the interaction between automated vehicles (AVs) and so-called "vulnerable road users" (VRUs). While road safety statistics and traffic psychology at least define VRUs as pedestrians, cyclists, and motorcyclists, many publications on human-vehicle interaction use the term without defining it at all, leaving the actual target group unclear. Since each group already poses a broad spectrum of research challenges, a one-size-fits-all solution seems unrealistic and inappropriate, and a much clearer differentiation is required.

To foster clarity and comprehensibility, we propose a literature-based taxonomy that provides a structured separation of (vulnerable) road users, designed particularly (but not exclusively) to support research on communication between VRUs and AVs. It consists of two conceptual hierarchies and will help practitioners and researchers by providing a uniform and comparable set of terms for the design, implementation, and description of HCI applications.
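Such a taxonomy is naturally represented as a tree of categories. A minimal sketch of that data structure; the node names below are an illustrative fragment only, since the abstract does not reproduce the paper's two hierarchies:

```python
from dataclasses import dataclass, field

@dataclass
class RoadUserNode:
    name: str
    children: list["RoadUserNode"] = field(default_factory=list)

# Illustrative fragment; not the paper's actual categories.
taxonomy = RoadUserNode("road user", [
    RoadUserNode("motorized", [RoadUserNode("car driver"),
                               RoadUserNode("motorcyclist")]),
    RoadUserNode("non-motorized", [RoadUserNode("pedestrian"),
                                   RoadUserNode("cyclist")]),
])

def print_tree(node: RoadUserNode, depth: int = 0) -> None:
    print("  " * depth + node.name)
    for child in node.children:
        print_tree(child, depth + 1)

print_tree(taxonomy)
```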

What Matters in Professional Drone Pilots’ Practice? An Interview Study to Understand the Complexity of Their Work and Inform Human-Drone Interaction Research
Description

Human-drone interaction is a growing topic of interest within HCI research. Researchers propose many innovative concepts for drone applications, but much of this research does not incorporate knowledge of existing applications already adopted by professionals, which limits its validity. To address this limitation, we present findings from an in-depth interview study with 10 professional drone pilots. Our participants had significant experience and qualifications pertinent both to drone operations and to a set of applications covering diverse industries. Our findings have resulted in design recommendations that should inform both the ends and the means of human-drone interaction research. These include, but are not limited to: safety-related protocols, insights from domain-specific use cases, and relevant practices outside of hands-on flight.

ExplAIn Yourself! Transparency for Positive UX in Autonomous Driving
Description

In a fully autonomous driving situation, passengers hand over steering control to a highly automated system. Autonomous driving behaviour may lead to confusion and negative user experience, and when establishing such new technology, the user’s acceptance and understanding are crucial factors in its success or failure. Using a driving simulator and a mobile application, we evaluated whether system transparency during and after the interaction can improve the user experience and the subjective feeling of safety and control. We contribute an initial guideline for autonomous driving experience design, bringing together the areas of user experience, explainable artificial intelligence, and autonomous driving. The AVAM questionnaire, UEQ-S, and interviews show that explanations during or after the ride help turn a negative user experience into a neutral one, which might be due to the increased feeling of control. However, we did not detect an additional effect of combining explanations during and after the ride.
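In practice, the transparency manipulation amounts to attaching a short natural-language explanation to each noticeable driving decision, delivered during the ride and/or summarized afterwards. A minimal, hypothetical sketch of such a trigger (the event names and messages are assumptions, not the study's wording):

```python
# Hypothetical mapping from driving events to explanations.
EXPLANATIONS = {
    "hard_brake": "Braking: a vehicle ahead stopped suddenly.",
    "lane_change": "Changing lanes to pass a slower vehicle.",
    "slow_down": "Slowing down: approaching a construction zone.",
}

ride_log: list[str] = []

def on_event(event: str, during_ride: bool = True) -> None:
    message = EXPLANATIONS.get(event, "Adjusting to current traffic.")
    if during_ride:
        print(message)          # shown immediately on the in-car display
    ride_log.append(message)    # kept for the post-ride summary

for e in ("hard_brake", "lane_change"):
    on_event(e)
print("Post-ride summary:", ride_log)
```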

Disagree? You Must Be a Bot! How Beliefs Shape Twitter Profile Perceptions
Description

In this paper, we investigate the human ability to distinguish political social bots from humans on Twitter. Following motivated reasoning theory from social and cognitive psychology, our central hypothesis is that opinion-incongruent accounts are especially likely to be perceived as social bots when an account is ambiguous about its nature. We also hypothesize that credibility ratings mediate this relationship. We asked N = 151 participants to evaluate 24 Twitter accounts and decide whether each account was a human or a social bot. Findings support our motivated reasoning hypothesis for a sub-group of Twitter users (those who are more familiar with Twitter): opinion-incongruent accounts are evaluated as more bot-like than opinion-congruent accounts, and this holds regardless of whether the account is clearly a social bot, clearly human, or ambiguous about its nature. The effect was mediated by perceived credibility: congruent profiles were judged more credible and were therefore less likely to be perceived as bots.
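The mediation claim (congruence influences bot perception via credibility) can be illustrated with the classic two-regression comparison: the effect of congruence on bot ratings should shrink once credibility is controlled for. A sketch on synthetic data; the paper's actual model and coefficients are not reproduced here:

```python
# Minimal mediation sketch with synthetic data and simple OLS regressions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
congruent = rng.integers(0, 2, n)                          # 1 = opinion-congruent
credibility = 2.0 + 1.5 * congruent + rng.normal(0, 1, n)  # mediator
bot_rating = 5.0 - 0.8 * credibility + rng.normal(0, 1, n)
df = pd.DataFrame({"congruent": congruent,
                   "credibility": credibility,
                   "bot_rating": bot_rating})

total = smf.ols("bot_rating ~ congruent", df).fit()                  # path c
direct = smf.ols("bot_rating ~ congruent + credibility", df).fit()   # path c'
print(f"total effect:  {total.params['congruent']:.2f}")
print(f"direct effect: {direct.params['congruent']:.2f}")  # shrinks toward 0
```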
