Natural Expression of a Machine Learning Model's Uncertainty Through Verbal and Non-Verbal Behavior of Intelligent Virtual Agents

Abstract

Uncertainty cues are inherent in natural human interaction, as they signal to communication partners how much they can rely on conveyed information. Humans subconsciously provide such signals both verbally (e.g., through expressions such as "maybe" or "I think") and non-verbally (e.g., by diverting their gaze). In contrast, artificial intelligence (AI)-based services and machine learning (ML) models such as ChatGPT usually do not disclose the reliability of answers to their users. In this paper, we explore the potential of combining ML models as powerful information sources with human means of expressing uncertainty to contextualize the information. We present a comprehensive pipeline that comprises (1) the human-centered collection of (non-)verbal uncertainty cues, (2) the transfer of cues to virtual agent videos, (3) the annotation of videos for perceived uncertainty, and (4) the subsequent training of a custom ML model that can generate uncertainty cues in virtual agent behavior. In a final step (5), the trained ML model is evaluated in terms of both fidelity and generalizability of the generated (non-)verbal uncertainty behavior.
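The five-stage pipeline can be illustrated compactly. The following is a minimal, hypothetical sketch in Python: all names (AnnotatedClip, the stage functions), the synthetic data, and the 1-D least-squares model are illustrative assumptions standing in for the authors' cue collection, video annotation, and custom generative ML model, not their actual implementation.

```python
# Hypothetical sketch of the five-stage pipeline described in the abstract.
# Everything here (names, synthetic data, linear model) is illustrative only.
import random
from dataclasses import dataclass

@dataclass
class AnnotatedClip:
    hedge_phrase: str             # verbal cue rendered in the agent video
    gaze_aversion: float          # non-verbal cue intensity in [0, 1]
    perceived_uncertainty: float  # annotated perceived uncertainty in [0, 1]

def stages_1_to_3(n: int) -> list[AnnotatedClip]:
    """Stages 1-3: collect (non-)verbal cues, transfer them onto virtual-agent
    videos, and annotate perceived uncertainty (simulated with synthetic data)."""
    phrases = ["", "I think", "maybe"]
    clips = []
    for _ in range(n):
        g = random.random()                       # cue intensity of this clip
        phrase = phrases[min(int(g * len(phrases)), len(phrases) - 1)]
        # Synthetic ground truth: annotations roughly track cue intensity.
        label = min(1.0, max(0.0, g + random.gauss(0, 0.1)))
        clips.append(AnnotatedClip(phrase, g, label))
    return clips

def stage_4_train(clips: list[AnnotatedClip]):
    """Stage 4: fit a 1-D least-squares map from target uncertainty to
    gaze-aversion intensity (a stand-in for the custom generative model)."""
    xs = [c.perceived_uncertainty for c in clips]
    ys = [c.gaze_aversion for c in clips]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return lambda u: my + slope * (u - mx)

def stage_5_evaluate(model, clips: list[AnnotatedClip]) -> float:
    """Stage 5: fidelity as mean absolute error on held-out clips."""
    return sum(abs(model(c.perceived_uncertainty) - c.gaze_aversion)
               for c in clips) / len(clips)

if __name__ == "__main__":
    data = stages_1_to_3(200)
    train, test = data[:150], data[150:]
    model = stage_4_train(train)
    print(f"held-out MAE: {stage_5_evaluate(model, test):.3f}")
```

In this toy setup, generalizability (step 5) would correspond to evaluating the trained mapping on clips drawn from a distribution the model was not trained on, rather than on the held-out split above.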

Authors
Susanne Schmidt
Universität Hamburg, Hamburg, Germany
Tim Rolff
Universität Hamburg, Hamburg, Germany
Henrik Voigt
Friedrich-Schiller-University, Jena, Germany
Micha Offe
Universität Hamburg, Hamburg, Germany
Frank Steinicke
Universität Hamburg, Hamburg, Germany
Paper URL

https://doi.org/10.1145/3654777.3676454

Conference: UIST 2024

ACM Symposium on User Interface Software and Technology

Session: 3. Validation in AI/ML

Westin: Allegheny 3
5 presentations
2024-10-16 23:00:00 – 2024-10-17 00:15:00