For What It's Worth: Humans Overwrite Their Economic Self-Interest to Avoid Bargaining With AI Systems

Abstract

As algorithms increasingly augment and substitute for human decision-making, understanding how the introduction of computational agents changes the fundamentals of human behavior becomes vital. This pertains not only to users, but also to those parties who face the consequences of an algorithmic decision. In a controlled experiment with 480 participants, we exploit an extended version of two-player ultimatum bargaining where responders choose to bargain with either another human, another human with an AI decision aid, or an autonomous AI system acting on behalf of a passive human proposer. Our results show strong responder preferences against the algorithm, as most responders opt for a human opponent and demand higher compensation to reach a contract with autonomous agents. To map these preferences to economic expectations, we elicit incentivized subject beliefs about their opponent's behavior. The majority of responders maximize their expected value when this is in line with approaching the human proposer. In contrast, responders predicting income maximization for the autonomous AI system overwhelmingly override economic self-interest to avoid the algorithm.
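To make the expected-value comparison concrete, here is a minimal illustrative sketch; the symbols \pi_H and \pi_A are assumptions for this note and are not taken from the paper.

% Illustrative only: \pi_A and \pi_H denote the responder's believed payoff from
% bargaining with the autonomous AI system and with the human proposer, respectively.
% A purely self-interested responder picks the opponent with the higher expected payoff:
\[
  \text{choose the AI opponent} \iff \mathbb{E}[\pi_A] > \mathbb{E}[\pi_H].
\]

Read against this benchmark, the paper's central finding is that many responders whose elicited beliefs imply \mathbb{E}[\pi_A] > \mathbb{E}[\pi_H] nonetheless choose the human proposer, forgoing expected income to avoid the algorithm.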

Authors
Alexander Erlei
University of Goettingen, Goettingen, Germany
Richeek Das
Indian Institute of Technology Bombay, Mumbai, India
Lukas Meub
University of Goettingen, Goettingen, Germany
Avishek Anand
Leibniz Universität Hannover, Hannover, Germany
Ujwal Gadiraju
Delft University of Technology, Delft, Netherlands
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517734


Conference: CHI 2022

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

Session: Agents in the Loop

5 presentations
Session time: 2022-05-04 18:00:00 to 19:15:00