When collaborating with artificial intelligence (AI), humans can delegate tasks to leverage complementary AI competencies. However, they often delegate inefficiently. Equipping humans with knowledge about AI can potentially improve such inefficient delegation. We conducted a between-subjects experiment (two groups, n = 111) to examine how AI knowledge affects delegation in human-AI collaboration. We find that AI-knowledge-enabled humans align their delegation decisions more closely with their assessment of how suitable a task is for humans or AI (i.e., task appraisal). We show that delegation decisions closely aligned with task appraisal increase task performance. However, we also find that AI knowledge lowers future intentions to use AI, suggesting that AI knowledge is not strictly positive for human-AI collaboration. Our study contributes a new perspective to HCI design guidelines: AI features that educate humans about general AI functioning and about their own (human) performance and biases.
https://doi.org/10.1145/3544548.3580794
Cognitive biases have been shown to play a critical role in creating echo chambers and spreading misinformation. They undermine our ability to evaluate information and can influence our behaviour without our awareness. To enable the study of the occurrence and effects of biases on information consumption behaviour, we explore indicators for cognitive biases in physiological and interaction data. To this end, we conducted two experiments investigating how people experience statements that are congruent with or divergent from their own ideological stance. We collected interaction data, eye-tracking data, hemodynamic responses, and electrodermal activity while participants were exposed to ideologically tainted statements. Our results indicate that people spend more time processing statements that are incongruent with their own opinion. We also detected differences in blood oxygenation levels between congruent and divergent statements, a first step towards building systems that detect and quantify cognitive biases.
https://doi.org/10.1145/3544548.3580917
Digital companions are intended to augment complex human processes such as extensive decision-making. However, their acceptance may depend on their ability to adapt to individuals’ psychological states and preferred decision strategies. Regulatory Mode Theory divides human self-regulation into assessment (i.e., making comparisons) and locomotion (i.e., movement from state to state). These regulatory modes are more or less compatible with different decision strategies. In an experimental study (N=81, 2×2 between-subjects design), we explored whether digital companions can gain higher acceptance by considering these compatibilities. Participants were confronted with a decision task. The assisting digital companion first induced a regulatory mode (assessment vs. locomotion) and subsequently presented information according to one of two decision strategies (full evaluation vs. progressive elimination). We show that a fit between regulatory mode and decision strategy (assessment/full evaluation or locomotion/progressive elimination) leads to a more favorable evaluation of both the decision and the digital companion. No differences in decision accuracy or speed were observed.
https://doi.org/10.1145/3544548.3581529
Artificial intelligence (AI) applications have become an integral part of our society. However, studying AI as a single entity and studying idiosyncratic applications separately both have limitations. This study therefore used computational methods to categorize ten AI roles prevalent in everyday life and compared laypeople’s perceptions of them using online survey data (N = 727). Based on theoretical factors related to the fundamental nature of AI, a principal component analysis revealed two dimensions that categorize AI: human involvement and AI autonomy. K-means clustering identified four AI role clusters: tools (low in both dimensions), servants (high human involvement, low AI autonomy), assistants (low human involvement, high AI autonomy), and mediators (high in both dimensions). Multivariate analyses of covariance revealed that people assessed AI mediators the most favorably and AI tools the least favorably. Demographics also influenced laypeople’s assessments of AI. The implications of these results are discussed.
https://doi.org/10.1145/3544548.3581340
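As a concrete illustration of the two-step analysis described in the abstract above (principal component analysis followed by k-means clustering), the following is a minimal sketch in Python using scikit-learn. The synthetic ratings, the number of factor scores, and all variable names are assumptions for illustration; they do not reproduce the authors’ survey items, preprocessing, or code.

```python
# Minimal sketch of a PCA + k-means pipeline like the one described above.
# The data, item count, and names are illustrative assumptions only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical ratings: 10 AI roles scored on several theoretical factors.
ratings = rng.normal(size=(10, 6))  # 10 roles x 6 factor scores (synthetic)

# Step 1: standardize, then reduce to two principal components
# (interpreted in the paper as "human involvement" and "AI autonomy").
X = StandardScaler().fit_transform(ratings)
components = PCA(n_components=2).fit_transform(X)

# Step 2: k-means with k=4 to recover four role clusters
# (tools, servants, assistants, mediators in the paper's terminology).
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(components)

for role, (pc1, pc2), cluster in zip(range(10), components, labels):
    print(f"role {role}: involvement={pc1:+.2f}, autonomy={pc2:+.2f}, cluster={cluster}")
```

With real data, the component loadings would be inspected to justify the dimension labels before clustering; the synthetic data here merely demonstrates the mechanics of the pipeline.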
Social media platforms use short, highly engaging videos to catch users' attention. While the short-form video feeds popularized by TikTok are rapidly spreading to other platforms, we do not yet understand their impact on cognitive functions. We conducted a between-subjects experiment (N=60) investigating the impact of engaging with TikTok, Twitter, and YouTube while performing a Prospective Memory task (i.e., executing a previously planned action). The study required participants to remember intentions across interruptions. We found that the TikTok condition significantly degraded users’ performance in this task. As none of the other conditions (Twitter, YouTube, no activity) had a similar effect, our results indicate that the combination of short videos and rapid context switching impairs intention recall and execution. We contribute a quantified understanding of the effect of social media feed format on Prospective Memory and outline consequences for media technology designers seeking not to harm users’ memory and wellbeing.
https://doi.org/10.1145/3544548.3580778
In this paper, we introduce an AI-mediated framework that provides intelligent feedback to augment human cognition. Specifically, we leverage deep reinforcement learning (DRL) to provide adaptive time-pressure feedback that improves user performance in an arithmetic task. Time-pressure feedback can either improve or degrade user performance by regulating user attention and anxiety. Adaptive time-pressure feedback controlled by a DRL policy according to users' real-time performance could resolve this trade-off. However, DRL training and hyperparameter tuning may require large amounts of data and iterative user studies. We therefore propose a dual-DRL framework that trains a regulation DRL agent to regulate user performance by interacting with a second, simulation DRL agent that mimics user cognition behaviors learned from an existing dataset. Our user study demonstrates the feasibility and effectiveness of the dual-DRL framework in augmenting user performance compared to a baseline group.
https://doi.org/10.1145/3544548.3580905
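The dual-agent idea in the abstract above can be illustrated with a toy training loop: a user-simulation model responds to time-pressure actions, and a regulation agent learns which pressure level maximizes simulated performance. In this sketch, a fixed stochastic function stands in for the simulation DRL agent, and tabular Q-learning stands in for the deep regulation policy; all states, actions, reward shapes, and numbers are invented for brevity and are not the paper’s method.

```python
# Toy illustration of a dual-agent training loop in the spirit of the
# framework above. The simulator and the tabular Q-learner are stand-ins
# for the paper's two deep RL agents; all quantities are assumptions.
import numpy as np

rng = np.random.default_rng(1)

N_STATES = 3    # coarse user states: low / medium / high recent performance
N_ACTIONS = 3   # time-pressure levels: none / moderate / high

def simulated_user(state, action):
    """Stand-in for the simulation agent trained on user data:
    moderate pressure helps, high pressure hurts (assumed, not measured)."""
    base = 0.4 + 0.2 * state                      # better state -> better accuracy
    effect = {0: 0.0, 1: 0.15, 2: -0.2}[int(action)]
    accuracy = float(np.clip(base + effect + rng.normal(0, 0.05), 0, 1))
    next_state = min(2, max(0, state + (1 if accuracy > 0.6 else -1)))
    return next_state, accuracy                   # accuracy doubles as reward

# Regulation agent: tabular Q-learning in place of a deep policy.
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1

state = 1
for step in range(5000):
    # epsilon-greedy action selection over pressure levels
    action = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[state].argmax())
    next_state, reward = simulated_user(state, action)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("Learned pressure level per user state:", Q.argmax(axis=1))
```

The design point this loop captures is the one the abstract argues for: because the regulation agent trains against a simulator rather than live participants, the many thousands of interaction steps needed for policy learning and hyperparameter tuning do not require iterative user studies.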