EmotiV: Exploring Automatic Emotion Sharing through Facial Expression Recognition (FER) for Online Co-Watching

Abstract

Online video watching has become prevalent, and so have technologies that promote a sense of co-watching across distances. However, most co-watching technologies require active input from users (e.g., through text-based interactions) or rely on special devices. This paper presents EmotiV, a prototype designed to bring the co-watching experience to users without additional effort or devices by automatically capturing and sharing viewers' emotions through Facial Expression Recognition (FER). A user study with 20 participants in a comedy movie-watching scenario shows that EmotiV helped foster a sense of togetherness, aliveness, and fun, and was appreciated as more timely and authentic than traditional text-based interaction, though it offered less control. It also helped promote self-awareness and reflection, while raising privacy concerns that need to be addressed. These findings suggest that FER can serve as a lightweight, non-intrusive mechanism for augmenting remote co-watching, offering design insights for affect-aware computing to support everyday media consumption.

Authors
Yusen Zhang
University of Glasgow, Glasgow, United Kingdom
Edmond S. L. Ho
University of Glasgow, Glasgow, United Kingdom
Xianghua (Sharon) Ding
University of Glasgow, Glasgow, Lanarkshire, United Kingdom

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Mental Wellbeing

P1 - Room 115
7 presentations
2026-04-16, 20:15–21:45