Explanations Help: Leveraging Human Capabilities to Detect Cyberattacks on Automated Vehicles

Abstract

Existing defense strategies against cyberattacks on automated vehicles (AVs) often overlook the potential of humans to detect such attacks. To address this, we identified three types of human-detectable attacks targeting transportation infrastructure, AV perception modules, and AV execution modules. We proposed two types of displays, Alert and Alert plus Explanations (AlertExp), and conducted an online video survey study with 260 participants to systematically evaluate the effectiveness of these displays across cyberattack types. Results showed that AV execution module attacks were the hardest to detect and understand, but AlertExp displays mitigated this difficulty. In contrast, AV perception module attacks were the easiest to detect, while infrastructure attacks resulted in the highest post-attack trust in the AV system. Although participants were prone to false alarms, AlertExp displays mitigated the negative impacts of false alarms, whereas Alert displays performed worse than having no display. Overall, AlertExp displays are recommended to enhance human detection of cyberattacks.

Authors
Yaohan Ding
University of Pittsburgh, Pittsburgh, Pennsylvania, United States
Jun Ying
Purdue University, West Lafayette, Indiana, United States
Yiheng Feng
Purdue University, West Lafayette, Indiana, United States
Na Du
University of Pittsburgh, Pittsburgh, Pennsylvania, United States
DOI

10.1145/3706598.3714301

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714301


Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Autonomous Vehicle

Annex Hall F204
7 presentations
2025-05-01 18:00–19:30