How Should AI Systems Talk to Users When Collecting Their Personal Information? Effects of Role Framing and Self-Referencing on Human-AI Interaction

Abstract

AI systems collect our personal information in order to provide personalized services, raising privacy concerns and making users leery. As a result, systems have begun emphasizing overt over covert collection of information by directly asking users. This poses an important question for ethical interaction design, which is dedicated to improving user experience while promoting informed decision-making: Should the interface tout the benefits of information disclosure and frame itself as a help-provider? Or, should it appear as a help-seeker? We decided to find out by creating a mockup of a news recommendation system called Mindz and conducting an online user study (N=293) with the following four variations: AI system as help seeker vs. help provider vs. both vs. neither. Data showed that even though all participants received the same recommendations, power users tended to trust a help-seeking Mindz more whereas non-power users favored one that is both help-seeker and help-provider.

Authors
Mengqi Liao
The Pennsylvania State University, State College, Pennsylvania, United States
S. Shyam Sundar
The Pennsylvania State University, University Park, Pennsylvania, United States
DOI

10.1145/3411764.3445415

Paper URL

https://doi.org/10.1145/3411764.3445415

Conference: CHI 2021

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

Session: Human-AI, Automation, Vehicles & Drones / Trust & Explainability

[A] Paper Room 15, 2021-05-13 17:00:00~2021-05-13 19:00:00 / [B] Paper Room 15, 2021-05-14 01:00:00~2021-05-14 03:00:00 / [C] Paper Room 15, 2021-05-14 09:00:00~2021-05-14 11:00:00