The proliferation of robots in public spaces necessitates a deeper understanding of how these robots can interact with the people who share those spaces. In this paper, we present findings from video analysis of publicly deployed cleaning robots in a transit space, a major commercial airport, using their navigational troubles as a lens for documenting what robots currently lack in interactional competence. We demonstrate that these robots, while technically proficient, can disrupt the social order of a space because they fail to understand core aspects of human movement: mutual adjustment to others, the significance of social groups, and the purposes of different locations. In the discussion, we argue for exploring a new design space of movement: socially aware movement. By developing strong concepts that treat movement as an interactional and collaborative accomplishment, we can create systems that better integrate into the everyday rhythms of public life.
Existing research has examined how artificial teammates influence collaboration within teams, but far less is known about their role in shaping interactions between teams. In particular, it remains unclear how transparency about the integration of AI teammates influences intergroup biases in competitive contexts. To investigate this, we designed StarHarvest, an online game in which two hybrid teams (each consisting of one human and one bot, either concealed or revealed) competed for resources while the bots enacted prosocial or antisocial behaviors. Drawing on data from 240 participants, we analyzed behavioral choices, evaluations, and resource allocations toward ingroup and outgroup members. Our findings show that hidden bots fostered stronger within-team coordination but also enabled asymmetric retribution toward weaker opponents. By contrast, revealed bots were treated as secondary teammates, reducing cohesion and shifting responsibility onto human partners. We conclude with design implications for the socially responsible integration of artificial teammates, highlighting tensions between group-level and agent-level identities.
Large Language Models (LLMs) aim to mimic a natural form of human conversation, likely contributing to an anthropomorphic perception of AI in contrast to conventional human-computer interfaces. Our study explores human-AI conversations and humans' perception of their counterpart in a collaborative mystery-solving task with Anthropic's Claude 3.5 Sonnet v2 model. We collected self-report data on participants' perception of the interaction, measured task performance, and analyzed conversational dynamics using LLM-based emotion coding. We found that humans' perception of the AI, ranging from a teammate or colleague to a tool, did not necessarily affect mystery-solving performance, but it did correlate with aspects of the interaction itself. When participants perceived the AI as a teammate or colleague, they felt a stronger sense of team cohesion, and their conversations were more collaborative, with more positive emotions. These findings may help practitioners design human-AI interfaces that foster positive interactions without compromising performance.
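To make "LLM-based emotion coding" concrete, below is a minimal sketch of how each conversational turn could be assigned an emotion label via Anthropic's Python SDK. The three-way label set, the prompt wording, and the fallback rule are illustrative assumptions, not the study's actual coding scheme.

```python
# Illustrative sketch of LLM-based emotion coding of conversation turns.
# Assumptions: a toy 3-label scheme, ad-hoc prompt, ANTHROPIC_API_KEY set.
import anthropic

EMOTIONS = ["positive", "negative", "neutral"]  # hypothetical label set

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def code_emotion(turn: str) -> str:
    """Ask the model to assign exactly one emotion label to a single turn."""
    prompt = (
        "Label the emotional tone of this conversational turn with exactly "
        f"one of {EMOTIONS}. Reply with the label only.\n\nTurn: {turn}"
    )
    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # Claude 3.5 Sonnet v2
        max_tokens=10,
        messages=[{"role": "user", "content": prompt}],
    )
    label = message.content[0].text.strip().lower()
    return label if label in EMOTIONS else "neutral"  # fall back on parse failure

# Usage: label each turn of a transcript, then aggregate per conversation.
labels = [code_emotion(t) for t in
          ["Great catch, that alibi is fake!", "This clue makes no sense."]]
```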
AI VTubers, where the performer is not human but algorithmically generated, introduce a new context for fandom. While human VTubers have been substantially studied for their cultural appeal, parasocial dynamics, and community economies, little is known about how audiences engage with their AI counterparts. To address this gap, we present a qualitative study of Neuro-sama, the most prominent AI VTuber. Our findings show that engagement is anchored in active co-creation: audiences are drawn in by the AI's unpredictable yet entertaining interactions, cement loyalty through collective emotional events that trigger anthropomorphic projection, and sustain attachment via the AI's consistent persona. Financial support emerges not as a reward for performance but as a participatory mechanism for shaping livestream content, establishing a resilient fan economy built on ongoing interaction. These dynamics reveal how AI VTuber fandom reshapes fan–creator relationships, and they offer implications for designing transparent and sustainable AI-mediated communities.
Online communities often develop shared symbolic vocabularies that strengthen insider bonds but implicitly marginalize newcomers. On Chinese platforms, this dynamic is exemplified by “absurd language,” a style distinguished by irony, exaggeration, and local memes. While this form of expression fosters in-group intimacy, it creates significant cultural barriers for “Sino-digital non-natives.” This study investigates how AI can mediate cultural integration beyond mere translation. We developed an AI mediator integrating Chain-of-Thought (CoT) and Retrieval-Augmented Generation (RAG) to scaffold this journey. A mixed-methods evaluation (N=14) demonstrates significant improvements in comprehension accuracy over a baseline LLM. Crucially, our qualitative analysis reveals a novel five-stage model of cultural integration. This model charts the user's journey from peripheral observation to confident participation, detailing the AI's evolving role from “expert guide” to “creative collaborator.” Our findings illuminate the dynamics of agency and trust, offering a framework for designing AI as a catalyst for community integration.
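As a concrete illustration of the mediator's two components, the sketch below pairs a toy retrieval step (a two-entry slang glossary) with a chain-of-thought prompt, using an Anthropic-style chat API for concreteness. The glossary entries, matching rule, and prompt wording are assumptions for illustration; the paper's actual corpus and prompts are not reproduced here.

```python
# Minimal RAG + CoT mediation sketch. The glossary, retrieval rule, and
# prompt wording are illustrative assumptions, not the system described above.
import anthropic

# Toy retrieval corpus: community slang mapped to plain explanations.
GLOSSARY = {
    "yyds": "acronym of 'yong yuan de shen' (eternal god); superlative praise",
    "neijuan": "'involution'; exhausting, zero-sum internal competition",
}

def retrieve(post: str) -> list[str]:
    """Retrieval step: pull glossary entries whose term appears in the post."""
    return [f"{term}: {gloss}" for term, gloss in GLOSSARY.items()
            if term in post.lower()]

def mediate(post: str) -> str:
    """CoT step: have the model reason over retrieved context before explaining."""
    context = "\n".join(retrieve(post)) or "(no glossary match)"
    prompt = (
        "You help a newcomer understand an ironic Chinese internet post.\n"
        f"Glossary:\n{context}\n\nPost: {post}\n\n"
        "Think step by step: (1) flag slang and irony, (2) give the literal "
        "meaning, (3) explain the cultural connotation, and (4) end with a "
        "one-sentence explanation a newcomer could act on."
    )
    client = anthropic.Anthropic()
    reply = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=400,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.content[0].text
```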
Collaborative problem-solving under time pressure is common but difficult, as teams must generate ideas quickly, coordinate actions, and track progress. Generative AI offers new opportunities to assist such teams, but we know little about how proactive agents affect the dynamics of real-time, co-located teamwork. We studied two forms of proactive support in digital escape rooms: a facilitator agent that offered summaries and suggested group structures, and a peer agent that proposed ideas and answered queries. In a within-subjects study with 24 participants, we compared group performance and processes across three conditions: no AI, peer, and facilitator. Results show that the peer agent occasionally enhanced problem-solving by offering timely hints and memory support; however, it also disrupted flow, increased workload, and fostered over-reliance. In comparison, the facilitator agent provided light scaffolding but had limited impact on outcomes. Based on our findings, we provide design considerations for proactive generative AI agents.
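To illustrate how the two proactive roles might be configured, the sketch below expresses them as system prompts plus a simple silence-based intervention trigger. The prompt texts and the 45-second threshold are assumptions for illustration, not the study's implementation.

```python
# Sketch: the two proactive roles as system prompts plus a proactivity rule.
# The prompts and the silence threshold are illustrative assumptions.
import time

ROLE_PROMPTS = {
    "facilitator": (
        "You support a co-located team in a digital escape room. Periodically "
        "summarize progress and suggest how to divide work; never give answers."
    ),
    "peer": (
        "You are a teammate in a digital escape room. Propose concrete ideas, "
        "answer questions, and offer hints when the team seems stuck."
    ),
}

class ProactiveAgent:
    def __init__(self, role: str, silence_threshold: float = 45.0):
        self.system_prompt = ROLE_PROMPTS[role]
        self.silence_threshold = silence_threshold  # seconds of team inactivity
        self.last_activity = time.monotonic()

    def observe(self) -> None:
        """Call on every team utterance or action; resets the trigger."""
        self.last_activity = time.monotonic()

    def should_intervene(self) -> bool:
        """Proactivity rule: speak up only after a sustained lull."""
        return time.monotonic() - self.last_activity > self.silence_threshold
```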