Giving Robots a Voice: Human-in-the-Loop Voice Creation and Open-Ended Labeling

Abstract

Speech is a natural interface for humans to interact with robots. Yet, aligning a robot's voice to its appearance is challenging due to the rich vocabulary of both modalities. Previous research has explored a few labels to describe robots and tested them on a limited number of robots and existing voices. Here, we develop a robot-voice creation tool followed by large-scale behavioral human experiments (N=2,505). First, participants collectively tune robotic voices to match 175 robot images using an adaptive human-in-the-loop pipeline. Then, participants describe their impression of the robot or their matched voice using another human-in-the-loop paradigm for open-ended labeling. The elicited taxonomy is then used to rate robot attributes and to predict the best voice for an unseen robot. We offer a web interface to aid engineers in customizing robot voices, demonstrating the synergy between cognitive science and machine learning for engineering tools.
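The abstract outlines an adaptive human-in-the-loop pipeline in which participants collectively tune synthetic voice parameters to match a robot image. The following Python fragment is a minimal sketch of that general idea, not the authors' implementation: it runs a single tuning chain in which each trial exposes one parameter dimension as a slider and a rater picks the best-fitting value. The parameter names and ranges (pitch_shift, speaking_rate, formant_shift), the trial counts, and the simulated rater are assumptions made purely for illustration.

    import random

    # Hypothetical voice parameters and ranges (names assumed for illustration).
    PARAM_RANGES = {
        "pitch_shift": (-12.0, 12.0),   # semitones relative to a base voice
        "speaking_rate": (0.5, 2.0),    # relative tempo
        "formant_shift": (0.7, 1.3),    # vocal-tract scaling factor
    }

    NUM_CANDIDATES = 11   # slider positions presented per trial
    NUM_TRIALS = 30       # trials in one collective tuning chain


    def candidate_values(low, high, n):
        """Evenly spaced slider positions for one parameter."""
        step = (high - low) / (n - 1)
        return [low + i * step for i in range(n)]


    def simulated_participant(param, values, target):
        """Stand-in for a human rater: picks the candidate closest to a hidden
        'ideal' voice for the robot image. Real participants listen and choose."""
        return min(values, key=lambda v: abs(v - target[param]))


    def tune_voice(target, seed=0):
        """One chain: each trial hands one parameter dimension to a participant."""
        rng = random.Random(seed)
        voice = {p: rng.uniform(lo, hi) for p, (lo, hi) in PARAM_RANGES.items()}
        for _ in range(NUM_TRIALS):
            param = rng.choice(list(PARAM_RANGES))  # dimension adjusted this trial
            lo, hi = PARAM_RANGES[param]
            voice[param] = simulated_participant(
                param, candidate_values(lo, hi, NUM_CANDIDATES), target
            )
        return voice


    if __name__ == "__main__":
        hidden_target = {"pitch_shift": 5.0, "speaking_rate": 1.3, "formant_shift": 1.1}
        print(tune_voice(hidden_target))

In the study, successive participants would continue such chains across many robot images; the sketch replaces the human listener with a distance-to-target heuristic only so it runs end to end.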

Authors
Pol van Rijn
Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
Silvan Mertes
Augsburg University, Augsburg, Germany
Kathrin Janowski
Augsburg University, Augsburg, Germany
Katharina Weitz
Augsburg University, Augsburg, Germany
Nori Jacoby
Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
Elisabeth André
Augsburg University, Augsburg, Germany
Paper URL

https://doi.org/10.1145/3613904.3642038

Video

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Human-Robot Interaction A

Room: 318B
4 presentations
2024-05-14 23:00:00 – 2024-05-15 00:20:00