Autonomous Vehicle

Conference Name
CHI 2025
People Attribute Purpose to Autonomous Vehicles When Explaining Their Behavior: Insights from Cognitive Science for Explainable AI
Abstract

It is often argued that effective human-centered explainable artificial intelligence (XAI) should resemble human reasoning. However, empirical investigations of how concepts from cognitive science can aid the design of XAI are lacking. Based on insights from cognitive science, we propose a framework of explanatory modes to analyze how people frame explanations, whether mechanistic, teleological, or counterfactual. Using the complex safety-critical domain of autonomous driving, we conduct an experiment consisting of two studies on (i) how people explain the behavior of a vehicle in 14 unique scenarios (N₁ = 54) and (ii) how they perceive these explanations (N₂ = 382), curating the novel Human Explanations for Autonomous Driving Decisions (HEADD) dataset. Our main finding is that participants deem teleological explanations significantly better quality than counterfactual ones, with perceived teleology being the best predictor of perceived quality. Based on our results, we argue that explanatory modes are an important axis of analysis when designing and evaluating XAI and highlight the need for a principled and empirically grounded understanding of the cognitive mechanisms of explanation. The HEADD dataset and our code are available at: https://datashare.ed.ac.uk/handle/10283/8930.
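
As a rough illustration of the analysis the abstract reports (perceived teleology as the best predictor of perceived explanation quality), the sketch below fits a simple linear regression over per-explanation ratings. The column names and data are hypothetical stand-ins, not the HEADD dataset.

```python
# Minimal sketch: regress perceived explanation quality on perceived
# explanatory-mode ratings. Hypothetical data, not the HEADD dataset.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 382
ratings = pd.DataFrame({
    "mechanistic": rng.uniform(1, 7, n),
    "teleological": rng.uniform(1, 7, n),
    "counterfactual": rng.uniform(1, 7, n),
})
# Simulated quality ratings in which teleology carries the most weight.
quality = 0.6 * ratings["teleological"] + 0.2 * ratings["mechanistic"] \
          + rng.normal(scale=0.5, size=n)

model = LinearRegression().fit(ratings, quality)
for mode, coef in zip(ratings.columns, model.coef_):
    print(f"{mode:>14}: {coef:+.2f}")  # larger coefficient -> stronger predictor
```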

Authors
Balint Gyevnar
University of Edinburgh, Edinburgh, United Kingdom
Stephanie Droop
University of Edinburgh, Edinburgh, United Kingdom
Tadeg Quillien
University of Edinburgh, Edinburgh, United Kingdom
Shay Cohen
University of Edinburgh, Edinburgh, United Kingdom
Neil R. Bramley
University of Edinburgh, Edinburgh, Scotland, United Kingdom
Christopher Guy Lucas
University of Edinburgh, Edinburgh, United Kingdom
Stefano V. Albrecht
University of Edinburgh, Edinburgh, United Kingdom
DOI

10.1145/3706598.3713509

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713509

Video
Explanations Help: Leveraging Human Capabilities to Detect Cyberattacks on Automated Vehicles
Abstract

Existing defense strategies against cyberattacks on automated vehicles (AVs) often overlook the great potential of humans in detecting such attacks. To address this, we identified three types of human-detectable attacks targeting transportation infrastructure, AV perception modules, and AV execution modules. We proposed two types of displays: Alert and Alert plus Explanations (AlertExp), and conducted an online video survey study involving 260 participants to systematically evaluate the effectiveness of these displays across cyberattack types. Results showed that AV execution module attacks were the hardest to detect and understand, but AlertExp displays mitigated this difficulty. In contrast, AV perception module attacks were the easiest to detect, while infrastructure attacks resulted in the highest post-attack trust in the AV system. Although participants were prone to false alarms, AlertExp displays mitigated their negative impacts, whereas Alert displays performed worse than having no display. Overall, AlertExp displays are recommended to enhance human detection of cyberattacks.

Authors
Yaohan Ding
University of Pittsburgh, Pittsburgh, Pennsylvania, United States
Jun Ying
Purdue University, West Lafayette, Indiana, United States
Yiheng Feng
Purdue University, West Lafayette, Indiana, United States
Na Du
University of Pittsburgh, Pittsburgh, Pennsylvania, United States
DOI

10.1145/3706598.3714301

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714301

Video
OptiCarVis: Improving Automated Vehicle Functionality Visualizations Using Bayesian Optimization to Enhance User Experience
Abstract

Automated vehicle (AV) acceptance relies on users understanding the vehicle, which feedback can support. While visualizations aim to enhance user understanding of an AV's detection, prediction, and planning functionalities, establishing an optimal design is challenging: traditional "one-size-fits-all" designs, derived from resource-intensive empirical evaluations, might be unsuitable. This paper introduces OptiCarVis, a set of Human-in-the-Loop (HITL) approaches using Multi-Objective Bayesian Optimization (MOBO) to optimize AV feedback visualizations. We compare conditions using eight expert and user-customized designs for a Warm-Start HITL MOBO. An online study (N=117) demonstrates OptiCarVis's efficacy in significantly improving trust, acceptance, perceived safety, and predictability without increasing cognitive load. OptiCarVis facilitates comprehensive design-space exploration, enhancing in-vehicle interfaces for optimal passenger experiences and broader applicability.
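
The abstract's core technique, warm-start multi-objective Bayesian optimization over visualization parameters, can be sketched roughly as below using random scalarization over a Gaussian-process surrogate. The parameter space, objective names, and the rate_design() oracle are hypothetical stand-ins for participant feedback, not the OptiCarVis implementation.

```python
# Rough sketch of warm-start multi-objective Bayesian optimization (MOBO) for
# AV visualization parameters. All names and the rating oracle are hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
n_params = 4                                   # e.g. opacity, size, color, detail
objectives = ["trust", "acceptance", "perceived_safety", "predictability"]

def rate_design(x):
    """Hypothetical oracle: one rating per objective for a candidate design x."""
    return np.array([1.0 - np.sum((x - 0.5) ** 2) + 0.05 * rng.standard_normal()
                     for _ in objectives])

# Warm start: seed the surrogate with eight expert/user-customized designs.
X = rng.random((8, n_params))
Y = np.array([rate_design(x) for x in X])

for _ in range(20):                            # human-in-the-loop iterations
    w = rng.dirichlet(np.ones(len(objectives)))          # random scalarization
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, Y @ w)
    candidates = rng.random((256, n_params))
    mu, sigma = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(mu + sigma)]           # UCB-style acquisition
    X = np.vstack([X, x_next])
    Y = np.vstack([Y, rate_design(x_next)])

print("Best design found:", np.round(X[np.argmax(Y.mean(axis=1))], 2))
```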

Award
Honorable Mention
Authors
Pascal Jansen
Ulm University, Ulm, Baden-Württemberg, Germany
Mark Colley
Ulm University, Ulm, Germany
Svenja Krauß
Ulm University, Ulm, Germany
Daniel Hirschle
Universität Ulm, Ulm, Baden-Württemberg, Germany
Enrico Rukzio
University of Ulm, Ulm, Germany
DOI

10.1145/3706598.3713514

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713514

Video
Moving Beyond the Simulator: Interaction-Based Drunk Driving Detection in a Real Vehicle Using Driver Monitoring Cameras and Real-Time Vehicle Data
Abstract

Alcohol consumption poses a significant public health challenge, presenting serious risks to individual health and contributing to over 700 daily road fatalities worldwide. Digital interventions can play a crucial role in reducing these risks. However, reliable drunk driving detection systems are vital to effectively deliver these interventions. To develop and evaluate such a system, we conducted an interventional study on a test track to collect real vehicle data from 54 participants. Our system reliably identifies non-sober driving with an area under the receiver operating characteristic curve (AUROC) of 0.84 ± 0.11 and driving above the WHO-recommended blood alcohol concentration limit of 0.05 g/dL with an AUROC of 0.80 ± 0.10. Our models rely on well-known physiological drunk driving patterns. To the best of our knowledge, we are the first to (1) rigorously evaluate the potential of (2) driver monitoring cameras and real-time vehicle data for detecting drunk driving in a (3) real vehicle.
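
For context on the reported metric, the sketch below shows how an AUROC of the kind cited (mean ± spread across folds) is typically computed for a binary non-sober-driving classifier; the features and data are synthetic stand-ins for driver-monitoring and vehicle signals, not the study's pipeline.

```python
# Rough sketch of AUROC evaluation for a binary "non-sober driving" classifier.
# Features and labels are synthetic stand-ins, not the study's data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 540
X = rng.normal(size=(n, 5))   # e.g. gaze variance, blink rate, steering entropy, ...
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=n) > 0).astype(int)  # 1 = non-sober

scores = cross_val_score(GradientBoostingClassifier(), X, y, cv=5, scoring="roc_auc")
print(f"AUROC: {scores.mean():.2f} ± {scores.std():.2f}")
```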

Award
Best Paper
Authors
Robin Deuber
ETH Zürich, Zürich, Switzerland
Patrick Langer
ETH Zurich, Zurich, Switzerland
Mathias Kraus
Professor for Explainable AI in Business Value Creation, Regensburg, Bavaria, Germany
Matthias Pfäffli
University of Bern, Bern, Switzerland
Matthias Bantle
Institute of Forensic Medicine, University of Bern, Bern, Switzerland
Filipe Barata
ETH Zurich, Zurich, Switzerland
Florian von Wangenheim
ETH Zurich, Zurich, Switzerland
Elgar Fleisch
ETH Zurich, Zurich, Switzerland
Wolfgang Weinmann
University of Bern, Bern, Switzerland
Felix Wortmann
University of St. Gallen, St. Gallen, Switzerland
DOI

10.1145/3706598.3714007

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714007

Video
Predicting Trust In Autonomous Vehicles: Modeling Young Adult Psychosocial Traits, Risk-Benefit Attitudes, And Driving Factors With Machine Learning
Abstract

Low trust remains a significant barrier to Autonomous Vehicle (AV) adoption. To design trustworthy AVs, we need to better understand the individual traits, attitudes, and experiences that impact people's trust judgements. We use machine learning to understand the most important factors that contribute to young adult trust based on a comprehensive set of personal factors gathered via survey (n = 1457). Factors ranged from psychosocial and cognitive attributes to driving style, experiences, and perceived AV risks and benefits. Using the explainable AI technique SHAP, we found that perceptions of AV risks and benefits, attitudes toward feasibility and usability, institutional trust, prior experience, and a person's mental model are the most important predictors. Surprisingly, psychosocial and many technology- and driving-specific factors were not strong predictors. Results highlight the importance of individual differences for designing trustworthy AVs for diverse groups and lead to key implications for future design and research.
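
The SHAP-based analysis the abstract describes can be sketched roughly as below: train a tree model to predict trust from survey factors, then rank features by mean absolute SHAP value. Column names and data are hypothetical stand-ins for the survey items.

```python
# Rough sketch of SHAP feature-importance analysis for predicting AV trust.
# Column names and data are hypothetical stand-ins for the survey factors.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
cols = ["perceived_risk", "perceived_benefit", "institutional_trust",
        "prior_av_experience", "mental_model_score", "driving_style"]
X = pd.DataFrame(rng.normal(size=(1457, len(cols))), columns=cols)
trust = 0.6 * X["perceived_benefit"] - 0.5 * X["perceived_risk"] \
        + rng.normal(scale=0.3, size=len(X))

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, trust)
shap_values = shap.TreeExplainer(model).shap_values(X)

importance = pd.Series(np.abs(shap_values).mean(axis=0), index=cols)
print(importance.sort_values(ascending=False))   # most important predictors
# shap.summary_plot(shap_values, X)               # optional beeswarm plot
```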

Authors
Robert A. Kaufman
University of California, San Diego, La Jolla, California, United States
Emi Lee
University of California, San Diego, La Jolla, California, United States
Manas Satish Bedmutha
UC San Diego, La Jolla, California, United States
David Kirsh
University of California, San Diego, San Diego, California, United States
Nadir Weibel
UC San Diego, La Jolla, California, United States
DOI

10.1145/3706598.3713188

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713188

Video
Trust and Visual Focus in Automated Vehicles: A Comparative Study of Beginner and Experienced Drivers
Abstract

This study investigated the relationship between trust in automation, gaze behavior, and driving performance in beginner and experienced drivers during a simulated driving session. Twenty participants completed a 17-minute drive across three conditions: manual driving, non-critical automated driving, and critical automated driving, with a non-driving-related task (NDRT) introduced between conditions to assess visual attention. Driving performance was evaluated using the standard deviation of lateral position (SDLP), and visual attention was assessed with eye-tracking data in terms of mean gaze duration (MGD). While both groups demonstrated increased trust in the automated system post-session, beginners showed greater lateral position variability in critical conditions, suggesting over-reliance on automation. Eye-tracking analysis revealed significant changes in glance behavior across driving conditions, particularly in response to critical events. These findings highlight how driver experience shapes interactions with automated systems, emphasizing the importance of trust calibration in automated driving scenarios.
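
For reference, the driving-performance measure used here, the standard deviation of lateral position (SDLP), is simply the standard deviation of the vehicle's lane-position trace over a drive; the sketch below computes it on a synthetic 50 Hz signal (a hypothetical stand-in, not the study's data).

```python
# Minimal sketch of SDLP: standard deviation of lateral position over a drive.
# The 50 Hz lane-position trace below is a synthetic stand-in.
import numpy as np

rng = np.random.default_rng(0)
hz = 50
t = np.arange(0, 60, 1 / hz)                              # one minute of driving
lateral_pos_m = 0.15 * np.sin(0.2 * t) + rng.normal(scale=0.05, size=t.size)

sdlp = np.std(lateral_pos_m, ddof=1)                      # SDLP in meters
print(f"SDLP: {sdlp * 100:.1f} cm")                       # commonly reported in cm
```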

Authors
Richa Singh
Tampere University, Tampere, Pirkanmaa, Finland
Mounia Ziat
Bentley University, Waltham, Massachusetts, United States
Oleg Spakov
Tampere University, Tampere, Finland
John Mäkelä
Tampere University, Tampere, Finland
Veikko Surakka
Tampere University, Tampere, Finland
Roope Raisamo
Tampere University, Tampere, Finland
DOI

10.1145/3706598.3713806

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713806

Video
What Did My Car Say? Impact of Autonomous Vehicle Explanation Errors and Driving Context On Comfort, Reliance, Satisfaction, and Driving Confidence
Abstract

Explanations for autonomous vehicle (AV) decisions may build trust; however, explanations can contain errors. In a simulated driving study (n = 232), we tested how AV explanation errors, driving context characteristics (perceived harm and driving difficulty), and personal traits (prior trust and expertise) affected a passenger's comfort in relying on an AV, preference for control, confidence in the AV's ability, and explanation satisfaction. Errors negatively affected all outcomes. Surprisingly, despite identical driving, explanation errors reduced ratings of the AV's driving ability. Severity and potential harm amplified the negative impact of errors. Contextual harm and driving difficulty directly impacted outcome ratings and influenced the relationship between errors and outcomes. Prior trust and expertise were positively associated with outcome ratings. Results emphasize the need for accurate, contextually adaptive, and personalized AV explanations to foster trust, reliance, satisfaction, and confidence. We conclude with design, research, and deployment recommendations for trustworthy AV explanation systems.

Authors
Robert A. Kaufman
University of California, San Diego, La Jolla, California, United States
Aaron Broukhim
University of California San Diego, San Diego, California, United States
David Kirsh
University of California, San Diego, San Diego, California, United States
Nadir Weibel
UC San Diego, La Jolla, California, United States
DOI

10.1145/3706598.3713088

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713088

Video