Understanding Conversational and Expressive Style in a Multimodal Embodied Conversational Agent

Abstract

Embodied conversational agents have changed the ways we can interact with machines. However, these systems often do not meet users' expectations. A key limitation is that the agents are monotonous in their behavior and do not adapt to their interlocutor. We present SIVA (a Socially Intelligent Virtual Agent), an expressive, embodied conversational agent that can recognize human behavior during open-ended conversations and automatically align its responses to the conversational and expressive style of the other party. SIVA leverages multimodal inputs to produce rich and perceptually valid responses (lip syncing and facial expressions) during the conversation. We conducted a user study (N=30) in which participants rated SIVA as more empathetic and believable than a control agent without style matching. Based on almost 10 hours of interaction, participants who preferred interpersonal involvement rated SIVA as significantly more animate than participants who valued consideration and independence.

Authors
Deepali Aneja
Adobe Research, Seattle, Washington, United States
Rens Hoegen
Institute for Creative Technologies, Los Angeles, California, United States
Daniel McDuff
Microsoft, Seattle, Washington, United States
Mary Czerwinski
Microsoft Research, Redmond, Washington, United States
DOI

10.1145/3411764.3445708

Paper URL

https://doi.org/10.1145/3411764.3445708

Conference: CHI 2021

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

Session: Engineering Interactive Applications

[A] Paper Room 05, 2021-05-13 17:00:00~2021-05-13 19:00:00 / [B] Paper Room 05, 2021-05-14 01:00:00~2021-05-14 03:00:00 / [C] Paper Room 05, 2021-05-14 09:00:00~2021-05-14 11:00:00
Paper Room 05 (14 presentations)