Recent advancements in the conversational and social capabilities of generative AI (GenAI) have sparked interest in its role as an agent capable of actively participating in human-AI group discussions. Despite this momentum, it remains unclear how GenAI shapes conversational dynamics or how interface design affects its influence on the group. In this paper, we introduce interface-driven social prominence as a design lens for collaborative GenAI systems. We then present a GenAI-based conversational agent that can actively engage in spoken dialogue during video calls, and we design three distinct collaboration modes that vary the agent's social prominence by manipulating its presence in the shared space and the degree of control users have over its participation. A mixed-methods within-subjects study, in which 18 dyads engaged in realistic discussions with a GenAI agent, offers empirical insights into how communication patterns and the collective negotiation of GenAI's influence shift depending on how the agent is embedded into the collaborative experience. Based on these findings, we outline design implications for supporting the coordination and critical engagement required in human-AI groups.
Navigating large-scale online discussions is difficult due to their rapid pace and high volume of content. Platforms like Reddit employ “threads” to visually organize parallel discussions, but deep nesting obscures conversation flow. For moderators, this fragmentation compounds the difficulty of following evolving conversations and maintaining context across threads, which limits timely and effective moderation. In this paper, we present Needle, an interactive system that applies visual analytics to summarize key conversational metrics: activity, toxicity, and voting trends over time. Needle provides both high-level overviews and detailed breakdowns of threads, enabling moderators to identify priority areas without reading through entire nested conversations. Through a user study with ten Reddit moderators, we find that Needle provides a practical solution to maintain contextual understanding when navigating threaded discussions. Based on these findings, we propose design guidelines for future visualization-based tools that shape how people consume, interpret, and make sense of large-scale online discussions.
People visiting or moving to a new city often struggle to understand local vibes and everyday routines. Short-form videos on TikTok capture these local stories, but people still have to jump between chatbots, maps, and apps to turn them into concrete plans. We introduce PlaceWeave, a human-centered trip-planning system that foregrounds a place's “localness”. PlaceWeave builds a place knowledge graph from TikTok videos and uses it to ground all AI features: the conversational assistant, localness attributes on the map, and the route planner all draw on graph evidence. The interface combines an interactive map, an evidence-backed Insights Panel, and tools for organizing discoveries and composing itineraries in a single linked workspace. We validate the extracted localness attributes and run a within-subjects study with 18 participants, comparing PlaceWeave to a baseline using separate chat, map, video, and canvas tools. PlaceWeave helps people create more local-feeling plans, better understand neighborhood character and trade-offs, and avoid fragmented workflows. We show how localness-aware, graph-grounded AI can support more community-sensitive placemaking technologies.
Online comments significantly influence users' judgments, yet their presentation, often determined by platform algorithms, can introduce biases, such as anchoring effects, that distort reasoning. While existing research emphasizes mitigating individual cognitive biases, how user judgments evolve during engagement with comments remains overlooked. This study investigates how presentation cues impact reasoning and explores interface design strategies to mitigate bias. Through a preliminary experiment (N=18) and a co-design workshop, we identified key challenges users face across a four-stage process and distilled four design requirements: pre-engagement framing, interactive organization, reflective prompts, and synthesis support. Based on these insights, we developed CommSense, an on-the-fly plugin that enhances user engagement with online comments by providing visual overviews and lightweight prompts to guide reasoning. A between-subjects evaluation (N=24) demonstrates that CommSense improves bias awareness and reflective thinking, helping users produce more comprehensive, evidence-based rationales while maintaining high usability.
Developing AI literacy is increasingly urgent as generative AI reshapes creative practice. Yet most AI literacy frameworks are top-down and expert-driven, overlooking how literacy emerges organically in creative communities. To address this gap, we conducted a large-scale analysis of 122k Reddit conversations from 80 creativity-oriented subreddits spanning three years. Our analysis identified four consistent themes in AI literacy-related discussions, and we further traced how discourse shifted alongside major AI events. Surprisingly, creators primarily frame AI literacy around how to use tools effectively—foregrounding practice and task skills—while discussions of AI capabilities and ethics surge only around high-profile events. Our findings suggest that AI literacy is dynamic, practice-driven, and event-responsive rather than static or purely conceptual. This study provides insights for researchers, designers, and policymakers to develop learning resources, community support, and policies that better promote AI literacy in creative communities.
Asynchronous online discussions enable diverse participants to co-construct knowledge beyond individual contributions. This process ideally evolves through sequential phases, from superficial information exchange to deeper synthesis. However, many discussions stagnate in the early stages. Existing AI interventions typically target isolated phases, lacking mechanisms to progressively advance knowledge co-construction, and the impacts of different intervention styles in this context remain unclear and warrant investigation. To address these gaps, we conducted a design workshop to explore AI intervention strategies (task-oriented and/or relationship-oriented) throughout the knowledge co-construction process, and implemented them in an LLM-powered agent capable of facilitating progression while consolidating foundations at each phase. A within-subjects study (N=60) involving five consecutive asynchronous discussions showed that the agent consistently promoted deeper knowledge progression, with different styles exerting distinct effects on both content and experience. These findings provide actionable guidance for designing adaptive AI agents that sustain more constructive online discussions.
Private messaging platforms hinder public oversight, making misinformation hard to counter. Meanwhile, platforms are pivoting to crowdsourced verification amid waning trust in institutional fact-checkers. This raises a critical question: how do peer corrections compare with local journalists or fact-checking tiplines? We tested this via a privacy-preserving randomized field study on participants' real WhatsApp group messages in India, complemented by interviews. Fact-checks from a close contact significantly improved accuracy over the control group, while corrections from the local journalist and national tipline did not reach statistical significance. However, none of the interventions improved participants' ability to identify novel misinformation on similar themes, suggesting corrections on WhatsApp are context-bound rather than skill-building. We contribute: (1) the first ecologically valid randomized test of peer-led fact-checking on WhatsApp, benchmarked against journalists and tiplines; (2) an empirical account of how participants make sense of corrections in closed messaging environments; and (3) design implications for community-based fact-checking, including training high-social-capital individuals as embedded verifiers.