Studying the Effects of Cognitive Biases in Evaluation of Conversational Agents

Abstract

Humans quite frequently interact with conversational agents. The rapid advancement in generative language modeling through neural networks has helped advance the creation of intelligent conversational agents. Researchers typically evaluate the output of their models through crowdsourced judgments, but there are no established best practices for conducting such studies. Moreover, it is unclear whether cognitive biases in decision-making affect crowdsourced workers' judgments when they undertake these tasks. To investigate, we conducted a between-subjects study with 77 crowdsourced workers to understand the role of cognitive biases, specifically anchoring bias, when humans are asked to evaluate the output of conversational agents. Our results provide insight into how best to evaluate conversational agents. We find that increased consistency in ratings across two experimental conditions may be a result of anchoring bias. We also determine that external factors such as time and prior experience in similar tasks have effects on inter-rater consistency.

Award
Honorable Mention
Keywords
Conversational agents
Human evaluation
Anchoring bias
Experiment design
Authors
Sashank Santhanam
University of North Carolina at Charlotte, Charlotte, NC, USA
Alireza Karduni
University of North Carolina at Charlotte, Charlotte, NC, USA
Samira Shaikh
University of North Carolina at Charlotte, Charlotte, NC, USA
DOI

10.1145/3313831.3376318

Paper URL

https://doi.org/10.1145/3313831.3376318

Conference: CHI 2020

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2020.acm.org/)

Session: Biases & the effects of interfaces

Paper session
316B MAUI
5 presentations
2020-04-28 01:00 – 02:15