ExpresSense: Exploring a Standalone Smartphone to Sense Engagement of Users from Facial Expressions Using Acoustic Sensing

Abstract

Facial expressions have long been considered a metric that reflects a person's engagement with a task. Although expression detection methods have evolved considerably, most still build on image processing techniques, which suffer from occlusion, varying ambient light, and privacy concerns. In this paper, we propose ExpresSense, a lightweight application for standalone smartphones that relies on near-ultrasound acoustic signals to detect users' facial expressions. ExpresSense has been tested on different users in lab-scale and large-scale studies, for both posed and natural expressions. By achieving a classification accuracy of ≈ 75% over various basic expressions, we discuss the potential of a standalone smartphone to sense expressions through acoustic sensing.
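
The paper itself ships no code, but the sketch below is a rough, hypothetical illustration of the kind of near-ultrasound pipeline the abstract describes: emit an inaudible tone from the phone's speaker, record its reflection off the face, and turn Doppler-induced spectral changes into features for an expression classifier. Every parameter here (19 kHz carrier, 48 kHz sample rate, 150 Hz analysis band) is an assumption for illustration, not a detail taken from the paper, and the synthetic echo stands in for real microphone input.

```python
import numpy as np

FS = 48_000        # sample rate (Hz), typical for smartphone audio hardware
CARRIER = 19_000   # near-ultrasound probe frequency (assumed; the paper's value may differ)
DUR = 0.1          # duration of one probe frame in seconds

def make_probe(fs=FS, f=CARRIER, dur=DUR):
    """Generate an inaudible sine tone to play through the phone's speaker."""
    t = np.arange(int(fs * dur)) / fs
    return np.sin(2 * np.pi * f * t)

def band_feature(x, fs=FS, f=CARRIER, band=150.0):
    """Normalized spectral-energy profile in a narrow band around the carrier.
    Facial motion Doppler-shifts and spreads the reflected tone, so this
    profile changes with expression-related movement near the phone."""
    win = x * np.hanning(len(x))
    spec = np.abs(np.fft.rfft(win))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    sel = (freqs >= f - band) & (freqs <= f + band)
    energy = spec[sel]
    return energy / (energy.sum() + 1e-12)

if __name__ == "__main__":
    probe = make_probe()
    # Simulate an echo from a surface moving toward the phone at ~0.1 m/s:
    # a reflection is shifted by roughly f * 2v/c (here about 11 Hz).
    v, c = 0.1, 343.0
    t = np.arange(len(probe)) / FS
    echo = 0.3 * np.sin(2 * np.pi * CARRIER * (1 + 2 * v / c) * t)
    still = band_feature(probe)            # carrier only, no motion
    moving = band_feature(probe + echo)    # carrier plus Doppler-shifted echo
    print("feature length:", len(still))
    print("L1 distance   :", np.abs(still - moving).sum())
```

In a real deployment, such per-frame band profiles would be fed to a classifier trained on labeled expressions; the print statements here only show that motion measurably perturbs the feature.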

Authors
Pragma Kar
Jadavpur University, Kolkata, West Bengal, India
Shyamvanshikumar Singh
IIT Kharagpur, Kharagpur, India
Avijit Mandal
IIT Kharagpur, Kharagpur, India
Samiran Chattopadhyay
Jadavpur University, Kolkata, West Bengal, India
Sandip Chakraborty
IIT Kharagpur, Kharagpur, West Bengal, India
Paper URL

https://doi.org/10.1145/3544548.3581235

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Fabrication, Input, Sensing

Hall G1
6 presentations
2023-04-25 23:30:00 – 2023-04-26 00:55:00