Embodied virtual agents (EVAs) are widely used for personal companionship, where providing emotion feedback is a core function. While prior research has primarily examined rules for selecting the emotional category of feedback, it remains unclear which emotional-intensity feedback rule maximizes an EVA’s likability. To address this, we induced varying intensities of happiness and sadness in participants through video stimuli and presented EVAs giving facial emotion feedback at different intensities. Participants rated the EVAs’ likability and empathy and reported the EVA they expected. Results showed that in positive emotional states, the most liked EVA (ML-EVA) aligned with the most empathized EVA (ME-EVA), whereas in negative states it diverged from both the ME-EVA and the expected EVA. Moreover, ML-EVAs did not simply mirror participants’ emotional intensity. Based on these ML-EVA findings, we developed a continuous-intensity emotion feedback model that outperformed baseline models under both facial-only and facial-plus-voice conditions, offering guidelines for optimizing EVAs’ emotion feedback.