Script&Shift: A Layered Interface Paradigm for Integrating Content Development and Rhetorical Strategy with LLM Writing Assistants
Description

Good writing is a dynamic process of knowledge transformation, where writers refine and evolve ideas through planning, translating, and reviewing. Generative AI-powered writing tools can enhance this process but may also disrupt the natural flow of writing, such as when using LLMs for complex tasks like restructuring content across different sections or creating smooth transitions. We introduce Script&Shift, a layered interface paradigm designed to minimize these disruptions by aligning writing intents with LLM capabilities to support diverse content development and rhetorical strategies. By bridging envisioning, semantic, and articulatory distances, Script&Shift interactions allow writers to leverage LLMs for various content development tasks (scripting) and experiment with diverse organization strategies while tailoring their writing for different audiences (shifting). This approach preserves creative control while encouraging divergent and iterative writing. Our evaluation shows that Script&Shift enables writers to creatively and efficiently incorporate LLMs while preserving a natural flow of composition.

"When Two Wrongs Don't Make a Right" - Examining Confirmation Bias and the Role of Time Pressure During Human-AI Collaboration in Computational Pathology
Description

Artificial intelligence (AI)-based decision support systems hold promise for enhancing diagnostic accuracy and efficiency in computational pathology. However, human-AI collaboration can introduce and amplify cognitive biases, such as confirmation bias caused by false confirmation, when erroneous human opinions are reinforced by inaccurate AI output. This bias may increase under time pressure, a ubiquitous factor in routine pathology, as it strains practitioners' cognitive resources. We quantified confirmation bias triggered by AI-induced false confirmation and examined the role of time constraints in a web-based experiment in which trained pathology experts (n=28) estimated tumor cell percentages. Our results suggest that AI integration fuels confirmation bias, evidenced by a statistically significant positive linear mixed-effects model coefficient linking AI recommendations that mirror flawed human judgment to alignment with the system's advice. Conversely, time pressure appeared to weaken this relationship. These findings highlight potential risks of AI in healthcare and aim to support the safe integration of clinical decision support systems.
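
A minimal sketch, assuming hypothetical per-trial data and column names, of how a linear mixed-effects model of this kind could be specified in Python with statsmodels; this is illustrative only and not the authors' analysis code.

```python
# Illustrative only: hypothetical column names, not the authors' analysis.
# Alignment with the AI advice is regressed on whether the AI recommendation
# mirrored the expert's initial (flawed) estimate, moderated by time pressure,
# with a random intercept for each pathologist.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trials.csv")  # hypothetical per-trial data

model = smf.mixedlm(
    "alignment_with_ai ~ ai_mirrors_initial_estimate * time_pressure",
    data=df,
    groups=df["expert_id"],  # random intercept per expert
)
result = model.fit()
# A positive coefficient on ai_mirrors_initial_estimate would correspond
# to the confirmation-bias effect described in the abstract.
print(result.summary())
```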

How the Role of Generative AI Shapes Perceptions of Value in Human-AI Collaborative Work
Description

As artificial intelligence (AI) continues to transform the modern workplace, generative AI (GenAI) has emerged as a prominent tool capable of augmenting work processes. Defined by its ability to create or modify content, GenAI differs significantly from traditional machine learning models that classify, recognize, or predict patterns from existing data. This study explores the role of GenAI in shaping perceptions of AI’s contribution and how these perceptions influence both creators’ internal assessments of their work and their anticipation of external evaluators’ assessments. Our research develops and empirically tests a structural model through a between-subjects experiment, revealing that the role GenAI plays in the work process significantly impacts perceived enhancements in work quality and effort relative to human input. Additionally, we identify a critical trade-off between fostering worker assessments of creativity and managing perceived external assessments of the work’s value.

Supporting Co-Adaptive Machine Teaching through Human Concept Learning and Cognitive Theories
Description

An important challenge in interactive machine learning, particularly in subjective or ambiguous domains, is fostering bi-directional alignment between humans and models. Users teach models their concept definitions through data labeling, while refining their own understanding throughout the process. To facilitate this, we introduce MOCHA, an interactive machine learning tool informed by two theories of human concept learning and cognition. First, it utilizes a neuro-symbolic pipeline to support Variation Theory-based counterfactual data generation. By asking users to annotate counterexamples that are syntactically and semantically similar to already-annotated data but predicted to have different labels, the system can learn more effectively while helping users understand the model and reflect on their own label definitions. Second, MOCHA uses Structural Alignment Theory to present groups of counterexamples, helping users comprehend alignable differences between data items and annotate them in batch. We validated MOCHA's effectiveness and usability through a lab study with 18 participants.
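
As a rough sketch of the counterexample-selection idea described above (not MOCHA's actual neuro-symbolic pipeline), one could retrieve pool items that sit close to an annotated example in embedding space but that the current model predicts to have a different label; every name below is a hypothetical placeholder.

```python
# Rough sketch only: `model`, the embeddings, and the candidate pool are
# hypothetical placeholders, not MOCHA's implementation.
from sklearn.neighbors import NearestNeighbors

def counterexample_candidates(anchor_vec, anchor_label, pool_vecs, pool_texts,
                              model, k=20, top=3):
    """Return up to `top` pool items near the anchor whose predicted label differs."""
    nn = NearestNeighbors(n_neighbors=k).fit(pool_vecs)
    _, idx = nn.kneighbors(anchor_vec.reshape(1, -1))
    candidates = []
    for i in idx[0]:
        pred = model.predict(pool_vecs[i].reshape(1, -1))[0]
        if pred != anchor_label:  # similar item, different predicted label
            candidates.append(pool_texts[i])
        if len(candidates) >= top:
            break
    return candidates
```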

Intent Tagging: Exploring Micro-Prompting Interactions for Supporting Granular Human-GenAI Co-Creation Workflows
Description

Despite Generative AI (GenAI) systems' potential for enhancing content creation, users often struggle to effectively integrate GenAI into their creative workflows. Core challenges include misalignment of AI-generated content with user intentions (intent elicitation and alignment), user uncertainty around how to best communicate their intents to the AI system (prompt formulation), and insufficient flexibility of AI systems to support diverse creative workflows (workflow flexibility). Motivated by these challenges, we created IntentTagger: a system for slide creation based on the notion of Intent Tags—small, atomic conceptual units that encapsulate user intent—for exploring granular and non-linear micro-prompting interactions for Human-GenAI co-creation workflows. Our user study with 12 participants provides insights into the value of flexibly expressing intent across varying levels of ambiguity, meta-intent elicitation, and the benefits and challenges of intent tag-driven workflows. We conclude by discussing the broader implications of our findings and design considerations for GenAI-supported content creation workflows.

A Matter of Perspective(s): Contrasting Human and LLM Argumentation in Subjective Decision-Making on Subtle Sexism
Description

In subjective decision-making, where decisions are based on contextual interpretation, Large Language Models (LLMs) can be integrated to present users with additional rationales to consider. The diversity of these rationales is mediated by the ability to consider the perspectives of different social actors; however, it remains unclear whether and how models differ in the distribution of perspectives they provide. We compare the perspectives taken by humans and different LLMs when assessing subtle sexism scenarios. We show that these perspectives can be classified within a finite set (perpetrator, victim, decision-maker) and that they appear consistently in the argumentation produced by humans and LLMs, but in different distributions and combinations, revealing both differences and similarities with human responses and between models. We argue for the need to systematically evaluate LLMs' perspective-taking to identify the most suitable models for a given decision-making task. We discuss the implications for model evaluation.
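
A minimal sketch of how such perspective distributions might be compared, using made-up counts purely for illustration; this is not the paper's analysis or data.

```python
# Illustrative only: the counts are fabricated placeholders, not study data.
from scipy.stats import chi2_contingency

perspectives = ["perpetrator", "victim", "decision-maker"]
counts = [
    [34, 51, 15],  # hypothetical counts of each perspective in human rationales
    [12, 70, 18],  # hypothetical counts of each perspective in LLM rationales
]

for label, h, l in zip(perspectives, counts[0], counts[1]):
    print(f"{label}: human={h}, llm={l}")

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, p={p:.3f}")  # tests whether the two distributions differ
```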

“Housing Diversity Means Diverse Housing”: Blending Generative AI into Speculative Design in Rural Co-Housing Communities
Description

In response to various environmental and societal challenges, co-housing has emerged to support social cohesion, grassroots innovation, and ecological regeneration. Co-housing communities typically have smaller personal spaces and closer neighbourly relationships, and engage in more mutually supportive sustainable practices. To understand such communities' motivations and visions, we developed a speculative design tool that harnesses Generative Artificial Intelligence (GenAI) to facilitate the envisioning of alternative future scenarios that challenge prevailing values, beliefs, lifestyles, and ways of knowing in contemporary society. Within the context of co-housing communities, we conducted a participatory design study that engaged participants in co-creating their future communities. This paper unpacks the implications of this work and reflects on the co-design approach employing GenAI. Our main findings highlight that GenAI, as a catalyst for imagination, empowers individuals to create visualisations that pose questions through a plural and situated speculative discourse.
