How to Trick AI: Users' Strategies for Protecting Themselves from Automatic Personality Assessment

Abstract

Psychological targeting tries to influence and manipulate users' behaviour. We investigated whether users can protect themselves from being profiled by a chatbot that automatically assesses their personality. Participants interacted twice with the chatbot: (1) They chatted for 45 minutes in customer service scenarios and received their actual profile (baseline). (2) They were then asked to repeat the interaction and to disguise their personality by strategically tricking the chatbot into calculating a falsified profile. In interviews, participants mentioned 41 different strategies but could only apply a subset of them in the interaction. They were able to manipulate all Big Five personality dimensions by nearly 10%. Participants regarded personality as very sensitive data. As they found tricking the AI too exhausting for everyday use, we reflect on opportunities for privacy-protective designs in the context of personality-aware systems.

Keywords
chatbot
automatic personality assessment
personality
Authors
Sarah Theres Völkel
Ludwig Maximilian University of Munich, Munich, Germany
Renate Haeuslschmid
Madeira Interactive Technologies Institute & Ludwig Maximilian University of Munich, Funchal, Madeira Island, Portugal
Anna Werner
Ludwig Maximilian University of Munich, Munich, Germany
Heinrich Hussmann
Ludwig Maximilian University of Munich, Munich, Germany
Andreas Butz
Ludwig Maximilian University of Munich, Munich, Germany
DOI

10.1145/3313831.3376877

Paper URL

https://doi.org/10.1145/3313831.3376877

Conference: CHI 2020

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2020.acm.org/)

Session: Coping with AI: not agAIn!

Paper session
Room: 316C MAUI
5 presentations
2020-04-29 18:00:00
2020-04-29 19:15:00