Explainable Artificial Intelligence (XAI) is a discipline concerned with understanding the predictions of AI systems. What is ultimately desired from XAI methods is for an AI system to link its input and output in a way that is interpretable with reference to the environment in which it is applied. A variety of methods have been proposed, but we argue in this paper that what has yet to be considered is miscommunication: the failure to convey and/or interpret an explanation accurately. XAI can be seen as a communication process, and thus looking at how humans explain things to each other can provide guidance for its application and evaluation. We motivate a specific model of communication to help identify the essential components of the process, and show the critical importance of establishing common ground, i.e., the shared mutual knowledge, beliefs, and assumptions of the communicating participants.
https://dl.acm.org/doi/10.1145/3706598.3713771
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)