Emotion artificial intelligence (AI) is deployed in many high-impact areas, yet little is known about people's general attitudes toward, and comfort with, it across application domains. We conducted a survey (n=599) with a representative U.S. sample, oversampling marginalized groups who are more likely to experience emotion AI harms (i.e., people of color, disabled people, and minoritized genders). We find: 1) although comfort differed across 11 contexts, even the most favorable context (healthcare) yielded low comfort levels; 2) participants were significantly more comfortable with inferences of happiness and surprise than with other emotions; 3) disabled individuals and those of minoritized genders were significantly less comfortable than others across a variety of contexts; and 4) perceived accuracy explained a large proportion of the variance in comfort levels across contexts. We argue that attending to identity is key to examining emotion AI's societal and ethical impacts, and we discuss implications for emotion AI deployment and regulation.
https://dl.acm.org/doi/10.1145/3706598.3713501
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)