The Experience Sampling Method (ESM) is widely used to collect emotion self-reports that serve as labels for training machine learning models for emotion inference. However, because ESM studies are time-consuming and burdensome, participants often drop out before a study ends. Such unplanned withdrawal forces researchers to discard the dropout participants' data, significantly reducing both the quality and quantity of the collected self-reports. To address this problem, we leverage only the self-reporting similarity across participants (unlike prior work that applies different machine learning approaches to additional modalities) to estimate the missing self-reports. Specifically, we propose a Multi-task Learning (MTL) framework, MUSE, that reconstructs the missing self-reports of dropout participants. We evaluate MUSE in two in-the-wild studies (N1 = 24, N2 = 30) lasting 6 and 8 weeks, respectively, during which participants reported four emotions (happy, sad, stressed, relaxed) using a smartphone application. The evaluation reveals that MUSE estimates the missing emotion self-reports with an average AUCROC of 84% (Study I) and 82% (Study II). A follow-up evaluation of MUSE on a downstream emotion inference task shows no significant difference in inference performance when the estimated self-reports are used. These findings underscore the utility of MUSE for estimating missing self-reports in ESM studies and its applicability to downstream tasks such as emotion inference.
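To make the idea of a multi-task formulation over participants concrete, the sketch below shows one plausible hard-parameter-sharing setup; it is not the authors' implementation. It assumes the input to the model is the four binary emotion reports of the other participants at the same ESM prompt, with one task-specific head per dropout participant; the class name, layer sizes, and dimensions are illustrative assumptions.

```python
# Hedged sketch (not the MUSE implementation): shared layers model
# cross-participant self-reporting similarity; each dropout participant
# gets a task-specific head. Dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class MultiTaskSelfReportEstimator(nn.Module):
    def __init__(self, n_other_participants: int, n_dropouts: int,
                 n_emotions: int = 4, hidden_dim: int = 32):
        super().__init__()
        in_dim = n_other_participants * n_emotions
        # Shared layers capture similarity in self-reporting across participants.
        self.shared = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # One task-specific head per dropout participant.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, n_emotions) for _ in range(n_dropouts)]
        )

    def forward(self, other_reports: torch.Tensor, dropout_idx: int):
        """other_reports: (batch, n_other_participants * n_emotions) -> logits."""
        return self.heads[dropout_idx](self.shared(other_reports))

# Toy usage: estimate one dropout participant's missing reports at 8 prompts.
model = MultiTaskSelfReportEstimator(n_other_participants=23, n_dropouts=1)
batch = torch.randint(0, 2, (8, 23 * 4)).float()     # other participants' reports
probs = torch.sigmoid(model(batch, dropout_idx=0))   # per-emotion probabilities
estimated = (probs > 0.5).int()                      # happy, sad, stressed, relaxed
```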
https://doi.org/10.1145/3613904.3642833
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)