Modern knowledge work is increasingly collaborative, especially in information-intensive domains such as crisis response, scientific discovery, and software engineering. Software engineering epitomizes these trends through practices like pair programming and collaborative debugging. Yet existing computational models of information foraging remain individual-centric, leaving teams without support for social foraging: leveraging partners’ actions and communication to navigate complex projects. We introduce PFIS-T, a predictive computational model of social information foraging. Building on the PFIS model family, it integrates implicit cues from teammates’ recent navigation and explicit cues from synchronous communication to predict a programmer’s next action. We evaluated PFIS-T with ten three-person debugging teams, finding that it substantially outperforms the strongest individual baseline, PFIS3, correctly predicting 81.5% of navigations and improving accuracy by 16.7%. These results show how predictive models can operationalize social foraging and point to opportunities for collaborative IDEs and interactive systems that adaptively surface social trails to improve coordination and awareness.
Post-surgery care involves ongoing collaboration between provider teams and patients, spanning post-surgery hospitalization through home recovery after discharge. While prior HCI research has primarily examined patients’ challenges at home, less is known about how provider teams coordinate discharge preparation and care handoffs, and how breakdowns in communication and care pathways may affect patient recovery. To investigate this gap, we conducted semi-structured interviews with 13 healthcare providers and 4 patients in the context of gastrointestinal (GI) surgery. We found that coordination boundaries between in- and out-patient teams, coupled with complex organizational structures within teams, impeded the “invisible work” of preparing patients’ home care plans and triaging patient information. For patients, these breakdowns resulted in inadequate preparation for the home transition and fragmented self-collected data, both of which undermined timely clinical decision-making. Based on these findings, we outline design opportunities to formalize task ownership and handoffs, contextualize co-temporal signals, and align care plans with home resources.
Cross-functional teams struggle when static collaboration tools fail to keep pace with dynamic conversations. Through a formative study with seven professionals, we identified a critical gap: designers and developers speak different vocabularies, causing semantic misalignments. We present Cognitive Bridge, an AI system that monitors multimodal cues (facial expressions, speech, workspace activity) to detect emerging misunderstandings, then generates adaptive boundary objects (visual diagrams, wireframes, and flowcharts) that translate between professional perspectives in real time. Our controlled study with 16 designer-developer dyads found that Cognitive Bridge reduced communication conflicts by 47% and increased implementable solutions by 34% compared to baseline tools. However, analysis revealed a solution-exploration tradeoff: while AI accelerated alignment, it risked premature convergence that constrained creative exploration. We contribute: (1) a novel system for AI-generated boundary objects, and (2) design implications for balancing cognitive scaffolding with the preservation of creative agency.
Artificially intelligent agents are increasingly moving beyond decision-support roles to become teammates, creating novel team configurations beyond traditional human-AI dyads. One such configuration is a hierarchical team, where a human leader directs both human and agent subordinates. This raises key questions about managing mixed-identity subordinates and about how agent traits (ability/integrity) shape trust. We present a lab study with teams of four (one human leader, with one human and two agent subordinates) performing a collaborative block-moving task. Leaders interacted with three types of agents that varied in ability and integrity: High-Integrity-High-Ability (HI-HA), High-Integrity-Low-Ability (HI-LA), and Low-Integrity-High-Ability (LI-HA). Leaders generally preferred and maintained stable trust in humans, whereas trust in agents declined significantly under both low-ability and low-integrity conditions, with stronger sensitivity to integrity. Thematic analysis revealed distinct expectations tied to identity: leaders granted humans an inherent baseline of trust due to humans' adaptability, while evaluating agents primarily on task efficiency and obedience.
Artificial intelligence (AI) is increasingly deployed in high-stakes domains such as search-and-rescue (SAR), where AI detections or classifications can shape how teams share information, build trust, and make time-critical decisions. This paper investigates how teams of SAR professionals incorporate AI into their teamwork, highlighting both benefits and challenges. To support this study, we developed the Council of Wizards, a multi-agent Wizard-of-Oz technique that simulates distributed AI systems, enabling scalable and controlled evaluation of collaborative dynamics. Using this novel method, we conducted an experiment with 24 subject-matter experts (SMEs) who reviewed SAR video footage in small teams and made group decisions, with or without AI support. Quantitative results showed that AI-assisted teams reached consensus faster than controls. Qualitative feedback revealed how participants interpreted trust cues, adapted strategies, and sometimes struggled with overload or conflicting detections. Findings illustrate how AI shapes teamwork in SAR and provide design implications for trustworthy distributed human-AI interactions.
Interdisciplinary teams developing complex technologies such as healthtech struggle to align disciplinary perspectives, stakeholder priorities, and evolving problem framings, particularly during rapid iteration, when existing collaboration tools offer limited support for in-session negotiation. We present KNIT, an AI-mediated framework that conceptualises AI-generated artefacts as computational boundary objects. KNIT supports convergence by externalising anonymised individual inputs into shared artefacts (semantic clusters and stakeholder-centred problem reframings) that surface differences in interpretation and make them available for negotiation. We evaluated KNIT in workshops with seven early-stage healthtech teams (28 participants), analysing 190 interaction episodes using Carlile’s 3T framework. KNIT supported knowledge boundary crossing across syntactic (95.0%), semantic (86.3%), and pragmatic (84.8%) levels. We contribute empirical evidence and design principles showing how computational boundary objects mediate distinct boundary-crossing mechanisms, demonstrating that representational transformation, rather than automation, is the primary mechanism through which AI enables convergence across disciplinary boundaries.
While conversational agents increasingly mediate teamwork, prior work has mainly focused on when an intervention occurs, what it contains, or to whom it is directed, with little attention to where mediation occurs. Therefore, we introduce SeeSawBot, an LLM-driven chatbot that operates across private DMs and public channels. Following a formative study, we deployed SeeSawBot in student Slack teams as a technology probe for eight weeks, collecting bi-weekly reflection surveys and post-deployment interviews. Findings show that cross-space mediation fostered sense-making across private and public spaces and redistributed emotional labor through interventions that played different relational roles over the course of team development. We discuss cross-space mediation as both a boundary object and a boundary actor, and argue that future evaluation frameworks should capture relational agency by attending to the back-and-forth negotiations through which groups construct collective understanding. We conclude with design implications that foreground where as a design variable for future computational mediators, a seesaw of agency and autonomy.