Games that feature multiple players, limited communication, and partial information are particularly challenging for AI agents. In the cooperative card game Hanabi, which possesses all of these attributes, AI agents fail to achieve scores comparable to those of even first-time human players. Through an observational study of three mixed-skill Hanabi play groups, we identify the techniques humans use that help to explain their superior performance relative to AI: physical artefact manipulation, coordination play, role establishment, and continual rule negotiation. Our findings extend previous accounts of human performance in Hanabi, which are framed purely in terms of theory-of-mind reasoning, by revealing more precisely how this form of collective decision-making is enacted in skilled human play. Our interpretation points to a gap in the current capabilities of AI agents to perform cooperative tasks.
DOI: https://doi.org/10.1145/3544548.3581550
Published at the ACM CHI Conference on Human Factors in Computing Systems, CHI 2023 (https://chi2023.acm.org/)