When modeling passive data to infer individual mental wellbeing, a common source of ground truth is self-reports. But these tend to represent the psychological facet of mental states, which might not align with the physiological facet of that state. Our paper demonstrates that when what people "feel" differs from what people "say they feel", we witness a semantic gap that limits predictions. We show that predicting mental wellbeing with passive data (offline sensors or online social media) is related to how the ground truth is measured (objective arousal or self-report). Features with psycho-social signals (e.g., language) were better at predicting self-reported anxiety and stress. Conversely, features with behavioral signals (e.g., sleep) were better at predicting stressful arousal. Regardless of the source of ground truth, integrating both signals boosted prediction. To reduce the semantic gap, we provide recommendations to evaluate ground truth measures and adopt parsimonious sensing.
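For a concrete sense of the comparison this abstract describes, the minimal sketch below trains a model on psycho-social features alone, behavioral features alone, and their concatenation. All names (X_language, X_behavior, y_self_report) and the synthetic data are illustrative placeholders, not the paper's pipeline.

```python
# Sketch of the feature-fusion comparison: psycho-social features,
# behavioral features, and both combined. Synthetic data stands in for
# real extracted features; this is not the paper's code.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_language = rng.normal(size=(200, 20))   # psycho-social features (e.g., language)
X_behavior = rng.normal(size=(200, 10))   # behavioral features (e.g., sleep)
y_self_report = rng.normal(size=200)      # self-reported stress/anxiety score

for name, X in [("language only", X_language),
                ("behavior only", X_behavior),
                ("combined", np.hstack([X_language, X_behavior]))]:
    score = cross_val_score(GradientBoostingRegressor(), X, y_self_report,
                            cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {score:.2f}")
```

With real features (rather than this random stand-in), the combined model would be expected to score highest, mirroring the finding that integrating both signals boosts prediction regardless of the ground-truth source.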
Scientists and journalists strive to report numbers with high precision to keep readers well-informed. Our work investigates whether this practice can backfire due to the cognitive costs of processing multi-digit precise numbers. In a pre-registered randomized experiment, we presented readers with several news stories containing numbers in either precise or round versions. We then measured their ability to approximately recall these numbers and make estimates based on what they read. Our results revealed a counter-intuitive effect where reading round numbers helped people better approximate the precise values, while seeing precise numbers made them worse. We also conducted two surveys to elicit individual preferences for the ideal degree of rounding for numbers spanning seven orders of magnitude in various contexts. From the surveys, we found that people tended to prefer more precision when the rounding options contained only digits (e.g., "2,500,000") than when they contained modifier terms (e.g., "2.5 million"). We conclude with a discussion of how these findings can be leveraged to enhance numeracy in digital content consumption.
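To illustrate the two stimulus formats the surveys contrast, the hedged sketch below rounds a precise value to two significant digits and renders it either as digits only ("2,500,000") or with a modifier term ("2.5 million"). The helper names are hypothetical, not the study's materials.

```python
# Illustrative sketch (not the paper's stimuli code): round a precise
# number to k significant digits, then render it as digits only or
# with a modifier term ("thousand", "million", "billion").
import math

def round_sig(x: float, sig: int = 2) -> float:
    """Round x to `sig` significant digits."""
    if x == 0:
        return 0.0
    return round(x, sig - int(math.floor(math.log10(abs(x)))) - 1)

def with_modifier(x: float) -> str:
    """Render x using a modifier term, e.g. 2_500_000 -> '2.5 million'."""
    for scale, term in [(1e9, "billion"), (1e6, "million"), (1e3, "thousand")]:
        if abs(x) >= scale:
            return f"{x / scale:g} {term}"
    return f"{x:g}"

precise = 2_537_468
rounded = round_sig(precise, 2)
print(f"{rounded:,.0f}")        # digits only: 2,500,000
print(with_modifier(rounded))   # modifier term: 2.5 million
```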
The effective use of assistive interfaces (i.e., those that offer suggestions or reform the user's input to match inferred intentions) depends on users making good decisions about whether and when to engage or ignore assistive features. However, prior work from economics and psychology shows systematic decision-making biases in which people overreact to low-probability events and underreact to high-probability events, modelled using a probability weighting function. We examine the theoretical implications of this probability weighting for interaction, including its suggestion that users will overuse inaccurate interface assistance and underuse accurate assistance. We then conduct a new analysis of data from a previously published study, quantifying the degree of bias users exhibited and demonstrating conformance with these predictions. We discuss implications for design, including strategies that could be used to mitigate the deleterious effects of the observed biases.
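The abstract does not say which probability weighting function the authors fit; a widely used one-parameter form from prospect theory (Tversky & Kahneman, 1992) is sketched below to show the over/underweighting pattern described.

```python
# One common probability weighting function from prospect theory
# (Tversky & Kahneman, 1992). The abstract does not specify which form
# the authors used, so this is an illustrative choice, not their model.
def tk_weight(p: float, gamma: float = 0.61) -> float:
    """w(p) = p^g / (p^g + (1-p)^g)^(1/g); gamma=0.61 is the TK estimate for gains."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    w = tk_weight(p)
    bias = "overweighted" if w > p else "underweighted"
    print(f"p = {p:.2f} -> w(p) = {w:.3f} ({bias})")
```

Running this shows the characteristic inverse-S shape: w(0.01) ≈ 0.055 (low probabilities overweighted) while w(0.99) ≈ 0.912 (high probabilities underweighted), matching the prediction that users overuse inaccurate assistance and underuse accurate assistance.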
The philosophical construct readiness-to-hand describes focused, intuitive tool use, and has been linked to tool-embodiment and immersion. The construct has been influential in HCI and design for decades, but researchers currently lack appropriate measures and tools to investigate it empirically. To support such empirical work, we investigate the possibility of operationalising readiness-to-hand in measurements of multifractality in movement, building on recent work in cognitive science. We conduct two experiments (N=44, N=30) investigating multifractality in mouse movements during a computer game, replicating prior results and contributing new findings. Our results show that multifractality correlates with dimensions associated with readiness-to-hand, including skill and task-engagement, during tool breakdown, task learning and normal play. We describe future possibilities for the application of these methods in HCI, supporting such work by sharing scripts and data, and introducing a new data-driven approach to parameter selection.
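As a rough illustration of how multifractality in movement can be estimated, here is a minimal multifractal detrended fluctuation analysis (MF-DFA) in the style of Kantelhardt et al. (2002). It is an independent sketch on synthetic data, not the scripts the authors share; a wide spread of h(q) across q indicates multifractality.

```python
# Minimal MF-DFA sketch in NumPy: compute the generalised Hurst
# exponent h(q) for several q; the spread max(h)-min(h) indexes
# multifractality. Illustrative only, not the authors' analysis code.
import numpy as np

def mfdfa_hq(x, scales=(16, 32, 64, 128, 256), qs=(-4, -2, 2, 4), order=1):
    y = np.cumsum(x - np.mean(x))                 # profile of the series
    hq = {}
    for q in qs:
        log_f, log_s = [], []
        for s in scales:
            n_seg = len(y) // s
            f2 = []
            for v in range(n_seg):
                seg = y[v * s:(v + 1) * s]
                t = np.arange(s)
                coeffs = np.polyfit(t, seg, order)      # local polynomial trend
                resid = seg - np.polyval(coeffs, t)
                f2.append(np.mean(resid**2))            # detrended segment variance
            f2 = np.asarray(f2)
            fq = np.mean(f2 ** (q / 2)) ** (1 / q)      # q-order fluctuation function
            log_f.append(np.log(fq))
            log_s.append(np.log(s))
        hq[q] = np.polyfit(log_s, log_f, 1)[0]          # slope of log F_q vs log s
    return hq

# Synthetic stand-in for a mouse-movement speed series:
rng = np.random.default_rng(1)
h = mfdfa_hq(rng.standard_normal(4096))
print(h, "spread:", max(h.values()) - min(h.values()))
```

For white noise like this stand-in, h(q) stays near 0.5 for all q and the spread is small; real movement series with intermittent, heavy-tailed structure yield a wider spread.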
Typing on mobile devices is a common and complex task. The act of typing itself encodes rich information, such as the typing method, the context it is performed in, and individual traits of the person typing. Researchers are increasingly using a selection or combination of experience sampling and passive sensing methods in real-world settings to examine typing behaviours. However, there is limited understanding of the effects these methods have on measures of input speed, typing behaviours, compliance, perceived trust and privacy. In this paper, we investigate the tradeoffs of everyday data collection methods. We contribute empirical results from a four-week field study (N=26) in which participants transcribed and composed text, had their typed sentences passively analyzed, and reflected on their contributions. We present a tradeoff analysis of these data collection methods, discuss their impact on text-entry applications, and contribute a flexible research platform for in-the-wild text-entry studies.
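For reference, the input-speed measure such studies typically report can be computed as below. The five-characters-per-word convention is standard in text-entry research (exact definitions vary slightly, e.g. some discount the first character's timing); this is an illustration, not the platform's actual code.

```python
# Words per minute, using the common text-entry convention that one
# "word" equals five characters. Illustrative helper, not the study's code.
def words_per_minute(transcribed: str, seconds: float) -> float:
    return (len(transcribed) / 5.0) / (seconds / 60.0)

print(f"{words_per_minute('the quick brown fox jumps over', 9.2):.1f} WPM")
```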