"I Didn't Know I Looked Angry": Characterizing Observed Emotion and Reported Affect at Work


With the growing prevalence of affective computing applications, Automatic Emotion Recognition (AER) technologies have garnered attention in both research and industry settings. Initially limited to speech-based applications, AER technologies now include analysis of facial landmarks to provide predicted probabilities for a common subset of emotions (e.g., anger, happiness) for faces observed in an image or video frame. In this paper, we study the relationship between AER outputs and the self-reports of affect employed by prior work, in the context of information work at a technology company. We compare the continuous observed-emotion output from an AER tool to discrete reported affect obtained via a one-day combined tool-use and diary study (N=15). We provide empirical evidence showing that these signals do not fully align, and find that incorporating additional workplace context improves alignment only up to 58.6%. These results suggest that affect must be studied in the context in which it is expressed, and that observed emotion signals should not replace internally reported affect in affective computing applications.

Harmanpreet Kaur
University of Michigan, Ann Arbor, Michigan, United States
Daniel McDuff
Microsoft, Seattle, Washington, United States
Alex C. Williams
University of Tennessee, Knoxville, Knoxville, Tennessee, United States
Jaime Teevan
Microsoft, Redmond, Washington, United States
Shamsi Iqbal
Microsoft Research, Redmond, Washington, United States



Conference: CHI 2022

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

Session: Emotions

4 presentations
Session time: 2022-05-02, 20:00–21:15