Collecting mobile screen information datasets remains challenging for academic researchers. Commercial organizations often have exclusive access to mobile data, leading to a “data monopoly” that restricts academic research and user transparency. Existing open-source mobile data collection frameworks primarily focus on mobile sensing data rather than screen content. We present Crepe, a no-code Android app that enables researchers to collect information displayed on screen through simple demonstrations of target data. Crepe utilizes a novel Graph Query technique, which augments mobile UI structures to support flexible identification, location, and collection of specific data pieces. The tool emphasizes participants' privacy and agency by providing full transparency over collected data and allowing easy opt-out. We designed and built Crepe for research purposes only and in scenarios where researchers obtain explicit consent from participants. Code for Crepe will be open-sourced to support future academic research data collection.
Cognitive biases often shape human decisions. While large language models (LLMs) have been shown to reproduce well-known biases, a more critical question is whether LLMs can predict biases at the individual level and emulate the dynamics of biased human behavior when contextual factors, such as cognitive load, interact with these biases. We adapted three well-established decision scenarios into a conversational setting and conducted a human experiment (N=1100). Participants engaged with a chatbot that facilitated decision-making through simple or complex dialogues. Results revealed robust biases. To evaluate how LLMs emulate human decision-making under similar interactive conditions, we used participant demographics and dialogue transcripts to simulate these conditions with LLMs based on GPT-4 and GPT-5. The LLMs closely reproduced human biases, though we found notable differences between models in how well they aligned with human behavior. These results have important implications for designing and evaluating adaptive, bias-aware LLM-based AI systems in interactive contexts.
Animation production workflows often involve digital colorization of line art, where small unpainted regions (``gaps'') frequently occur and remain an underexplored challenge. We conducted a formative study in Japanese animation (anime) pipelines and found that while the paint bucket tool is widely used for base coloring, tiny enclosed areas are frequently overlooked, resulting in time-consuming manual detection and filling. We introduce GapFill, a tool grounded in professional practices that reduces the effort of gap detection, zooming, and color selection. Our deep-learning method suggests appropriate fill colors by referencing surrounding regions, leveraging the flat-color nature of anime-style images. In a user study with 13 professional colorists, our system improved performance and usability in gap-filling tasks over conventional methods. The study also suggested that prediction accuracy alone is not the primary factor for usability, that appropriate colors can be contextually ambiguous, and that GapFill can complement existing tools depending on users' trust in new AI-powered assistance.
In group ideation, whether participants should ideate collaboratively or individually remains controversial. Collaborative ideation enables synergy, whereby creativity is stimulated through inspiration from others’ ideas; yet it also introduces evaluation apprehension, which can inhibit creativity due to fear of judgment. In contrast, solitary ideation mitigates evaluation apprehension but forgoes synergy. Existing hybrid approaches alternate between the two modes to balance their strengths, but how to integrate the advantages of both settings simultaneously remains underexplored. Therefore, we developed GraftMind, a system that enables users to ideate in private workspaces while an AI mediator proactively leverages collective ideas to provide real-time ideation assistance. The results of a user study demonstrate that GraftMind enhances group ideation performance; it not only enables synergy but also alleviates evaluation apprehension. Our findings underscore the potential of this novel group ideation setting.
Cross-language collaborative storytelling plays a vital role in children's language learning and cultural development, fostering both expressive ability and intercultural awareness. Yet, in practice, children's participation is often shallow, and facilitating such sessions places heavy cognitive and organizational burdens on coordinators, who must provide language support, maintain children's engagement, and navigate cultural differences. To address these challenges, we conducted a formative study with coordinators to identify their needs and pain points, which guided the design of SparkTales, an intelligent support system for cross-language collaborative storytelling. SparkTales leverages both individual and common characteristics of participating children to provide coordinators with story frameworks, diverse questions, and comprehension-oriented materials, aiming to reduce coordinators' workload while enhancing children's interactive engagement. Evaluation results show that SparkTales not only significantly increases coordinators' efficiency and quality of guidance but also improves children's participation, providing valuable insights for the design of future intelligent systems supporting cross-language collaboration.
The recent advancement of AI has shifted terminology: humans "use" computers but "collaborate with" AI. This anthropomorphic framing shapes expectations of system capabilities. Despite the large body of research adopting "human-AI collaboration" as a term, there seems to be little consensus on a definition of the concept. To address this potential gap and to provide a comprehensive overview of existing related literature, we first conducted a thematic analysis of human-human collaboration literature (n=60) to extract definitional components and associated concepts. Second, we analyzed publications on human-AI collaboration (n=299) using OpenAI’s GPT-4o mini and o3-mini models, mapping the identified concepts to the AI context to examine the extent to which these concepts of collaboration are represented there. Our findings provide a shared conceptual foundation to support interdisciplinary research and suggest future research directions. Additionally, they inform the design of human-AI interfaces and interaction processes, bridging theory and practice.
While Generative AI (GenAI) systems are designed primarily for individual use, they are increasingly integrated into collaborative work. However, their impact on collaboration dynamics, such as information flow, role negotiation, and decision-making, remains unclear. To investigate this, we conducted a qualitative study comprising observations and semi-structured interviews with a total of 27 higher education students through the lens of distributed cognition. Our findings show that in synchronous settings, shared use of GenAI supported transparency and mutual awareness, with the interaction space functioning as attentional anchors, shared memory, and negotiable contributions to group decisions. In contrast, in asynchronous teamwork, GenAI was typically used individually, with outputs later introduced into discussions, reducing opportunities for negotiation. As such, we contribute empirical evidence on GenAI's influence on collaborative dynamics, along with design considerations that position GenAI-Supported Cooperative Work (GSCW) as a bridge between Human–AI Interaction and CSCW.