Framing Helper Therapy to Support User Engagement: Causal Evidence from a Public Deployment of a Mental Health Support Text Messaging Program
Description

Digital peer-to-peer mental health tools have shown promise in supporting the well-being of both those receiving and those giving help (i.e., helper therapy), but promoting engagement remains a challenge. We examine whether the framing of helper therapy exercises motivates active user participation and how user characteristics shape differential effects of the framings in a publicly deployed interactive text messaging-based mental health program. Among 3,817 users randomized to different helper therapy framings, we find causal evidence that framings which emphasize helping oneself increase written engagement rates by as much as 4.6% over other framings, with even larger effects among minoritized identities. These self-focused framings also elicited messages with more positive, trust, and anticipation-related words and fewer fear, anger, disgust, and sadness words. Our findings highlight the importance of centering the user in the framing of digital intervention content, and of personalizing digital mental health tools to align with a diversity of user identities.

Interaction Methods in Generative AI Image Tools: A Review of Trends and Design Opportunities Across HCI and Industry
Description

Generative AI (GenAI) image tools are increasingly integrated into design workflows, prompting HCI research on their interaction methods and interfaces. We reviewed 37 such tools, including 28 HCI research systems and nine commercial systems (2022--July 2025), using three analytical frameworks: interaction methods, creative processes, and tool functionalities. We found that text prompts remain the dominant input method, while visual and attribute-based inputs---particularly in academic tools---are gaining traction and are often combined with text for refinement. Commercial systems emphasize parameter control, whereas academic tools focus on semantic attributes and visual organization. Most tools support ideation and exploration, but provide limited support for refinement and evaluation. Based on these findings, we identify nine design opportunities, including advanced visual interaction, simplified parameter control, precision editing, direct manipulation, workflow integration, default settings that support rapid exploration, and user guidance for later stages. We contribute a framework for analyzing GenAI interfaces and actionable directions for designing more usable, creativity-supportive GenAI image systems.

State Your Intention to Steer Your Attention: An AI Assistant for Intentional Digital Living
Description

When working on digital devices, people often face distractions that can reduce productivity and efficiency and carry negative psychological and emotional impacts. To address this challenge, we introduce a novel Artificial Intelligence (AI) assistant that elicits a user's intention, assesses whether ongoing activities are in line with that intention, and provides gentle nudges when deviations occur. The system leverages a large language model to analyze screenshots, application titles, and URLs, issuing notifications when behavior diverges from the stated goal. Its detection accuracy is refined through initial clarification dialogues and continuous user feedback. In a three-week, within-subjects field deployment with 22 participants, we compared our assistant to both a rule-based intent reminder system and a passive baseline that only logged activity. Results indicate that our AI assistant effectively supports users in maintaining focus and aligning their digital behavior with their intentions. Our source code is publicly available at https://intentassistant.github.io

Co-Disclosing the Computer: LLM-Mediated Computing through Reflective Conversation
Description

Large language models (LLMs) are changing how we interact with computers. As they become capable of generating software dynamically, they invite a fundamental rethinking of the computer's role in human activity. In this conceptual paper, we introduce LLM-mediated computing: a paradigm in which interaction is no longer structured around fixed applications, but emerges in real-time through human intent and LLM interpretation. We make three contributions: (1) we articulate a new interaction metaphor of reflective conversation to guide future design, (2) we use the lens of postphenomenology to understand the human-LLM-computer relation, and (3) we propose a new mode of computing based on co-disclosure, in which the computer is constituted in use. Together, they define a new mode of computing, provide a lens to analyze it, and offer a metaphor to design with.

Who Does What? Archetypes of Roles Assigned to LLMs During Human-AI Decision-Making
Description

LLMs are increasingly supporting decision-making across high-stakes domains, requiring critical reflection on the socio-technical factors that shape how humans and LLMs are assigned roles and interact during human-in-the-loop decision-making. This paper introduces the concept of human-LLM archetypes, defined as recurring socio-technical interaction patterns that structure the roles of humans and LLMs in collaborative decision-making. We describe 17 human-LLM archetypes derived from a scoping literature review and thematic analysis of 113 LLM-supported decision-making papers. We then evaluate these diverse archetypes across real-world clinical diagnostic cases to examine the potential effects of adopting distinct human-LLM archetypes on LLM outputs and decision outcomes. Finally, we present relevant tradeoffs and design choices across human-LLM archetypes, including decision control, social hierarchies, cognitive forcing strategies, and information requirements. Through our analysis, we show that the selection of a human-LLM interaction archetype can influence LLM outputs and decisions, bringing important risks and considerations for the designers of human-AI decision-making systems.

“Don’t Look, But I Know You Do”: Norms and Observer Effects in Shared LLM Accounts
Description

Account sharing is common in subscription services and is now extending to generative AI platforms, which are still primarily designed for individual use. Sharing often requires workarounds that create new tensions. This study examines how LLM subscriptions are shared and the norms that develop. We combined a survey of 245 users with interviews of 36 participants to understand both patterns and lived experiences. Our analysis identified four types of account sharing, organized along two dimensions: whether the owner uses the account and whether subscription costs are shared. Within these types, we examined how norms were formed and how their fragility, especially privacy, became evident in practice. Users, fully aware of this, subtly adjusted their behavior, which we interpret through the lens of the observer effect. We frame LLM account sharing as a social practice of appropriation and outline design implications to adapt single-user platforms to multi-user realities.

From Use to Oversight: How Mental Models Influence User Behavior and Output in AI Writing Assistants
Description

AI-based writing assistants are ubiquitous, yet little is known about how users’ mental models shape their use. We examine two types of mental models—functional, or related to what the system does, and structural, or related to how the system works—and how they affect control behavior—how users request, accept, or edit AI suggestions as they write—and writing outcomes. We primed participants (N = 48) with different system descriptions to induce these mental models before asking them to complete a cover letter writing task using a writing assistant that occasionally offered preconfigured ungrammatical suggestions to test whether the mental models affected participants’ critical oversight. We find that while participants in the structural mental model condition demonstrate a better understanding of the system, this can have a backfiring effect: while these participants judged the system as more usable, they also produced letters with more grammatical errors, highlighting a complex relationship between system understanding, trust, and control in contexts that require user oversight of error-prone AI outputs.
