Trust and Transparency in Everyday Life

Conference Name
CHI 2026
The Impacts of Transparency and Personalization on Feelings of Agency and Connection in Democratic Decision Making
Abstract

Community engagement processes often shape policies that affect people’s daily lives, yet they frequently struggle to build transparency, understanding, and agency. Civic technologies aim to address this gap by making connections between voices and decisions visible, but their impact on democratic participants is rarely evaluated. This study examines the effects of varying levels and types of transparency, including personalization, in technology-enabled civic decision-making on perceptions of agency, vertical and horizontal transparency, and community connection. We conducted an experiment with 266 participants who advocated for a local skate park or tennis court, and then received a decision for or against their position under varying transparency conditions. Results show that increased transparency improved perceptions of agency, vertical transparency, and horizontal transparency, but personalization had limited effects. Qualitative reflections highlighted horizontal transparency as particularly valuable for opening perspectives and enhancing participant experience. We discuss key design implications for civic technologies.

Authors
Margaret Hughes
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Cassandra Overney
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Mahmood Jasim
Louisiana State University, Baton Rouge, Louisiana, United States
Deb Roy
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Authorship Drift: How Self-Efficacy and Trust Evolve During LLM-Assisted Writing
Abstract

Large language models (LLMs) are increasingly used as collaborative partners in writing. However, this raises a critical challenge of authorship, as users and models jointly shape text across interaction turns. Understanding authorship in this context requires examining users’ evolving internal states during collaboration, particularly self-efficacy and trust. Yet, the dynamics of these states and their associations with users’ prompting strategies and authorship outcomes remain underexplored. We examined these dynamics through a study of 302 participants in LLM-assisted writing, capturing interaction logs and turn-by-turn self-efficacy and trust ratings. Our analysis showed that collaboration generally decreased users’ self-efficacy while increasing trust. Participants who lost self-efficacy were more likely to ask the LLM to edit their work directly, whereas those who recovered self-efficacy requested more review and feedback. Furthermore, participants with stable self-efficacy showed higher actual and perceived authorship of the final text. Based on these findings, we propose design implications for understanding and supporting authorship in human-LLM collaboration.

Authors
Yeon Su Park
KAIST, Daejeon, Korea, Republic of
Nadia Azzahra Putri Arvi
KAIST, Daejeon, Korea, Republic of
Seoyoung Kim
KAIST, Daejeon, Korea, Republic of
Juho Kim
KAIST, Daejeon, Korea, Republic of
Exploring Women’s Perspectives on Learning and Trust in Automated Vehicles: A Socio-Ecological Lens
Abstract

As automated vehicles (AVs) move toward mainstream adoption, understanding how users learn about and build trust in them is critical. Prior research shows that women hold safety concerns and report low trust and familiarity with AVs. While limited exposure is often cited as a cause, growing evidence indicates that women’s needs, preferences, and safety priorities remain insufficiently addressed in AV design and governance. We conducted ten dyadic and five individual semi-structured interviews with fifteen women, guided by feminist HCI principles. We then analysed findings through a socio-ecological framework to explore trust and learning. Our findings show that women's needs and expectations for AVs develop in conversation with gendered and caregiving responsibilities, and experiences of safety and vulnerability. Trust and learning co-evolve in this process as a dynamic association of forces influencing inclusive mobility. We contribute a feminist socio-ecological account of trust–learning dynamics, identifying design and policy interventions that support inclusive onboarding, institutional accountability, and community-based co-learning for equitable AV adoption.

Authors
Alaa H. A. Abusafia
Queensland University of Technology, Brisbane, Australia
Ronald Schroeter
Queensland University of Technology, Brisbane, Australia
Alessandro Soro
Queensland University of Technology, Brisbane, Australia
How Much Trust is Enough? Towards Calibrating Trust in Technology
Abstract

The role of trust within Human-Computer Interaction is being redefined. With the increasing omnipresence, autonomy, and opacity of technology, users often struggle to understand the capabilities and limitations of systems. In this article, we present the results of an empirical study designed to provide a practical, evidence-based interpretation of trust propensity assessment using the Human-Computer Trust Scale (HCTS). We outline the process used to develop a guideline for interpreting the instrument’s results and explain the rationale for our decisions, advocating for calibrating trust in technology within HCI. Our findings demonstrate that the HCTS is a promising tool for conducting an initial evaluation of propensity to trust, but that such an assessment requires reflection and interpretation that should be considered within the context of the interaction.

Authors
Gabriela Beltrão
Tallinn University, Tallinn, Estonia
Debora Conceição Firmino de Souza
Tallinn University, Tallinn, Harjumaa, Estonia
Sonia Sousa
Tallinn university, Tallinn, Estonia
David Lamas
Tallinn University, Tallinn, Estonia
Treading the Transparency Tightrope: A Taxonomy of Risks and Benefits of Foundation Model Data Transparency for Transparency Advocates
Abstract

Data powering AI is often opaque. Researchers, NGOs, and law and policy leaders have called for greater transparency about how data is used for training, fine-tuning, and evaluation. While data transparency is often championed as crucial, what it concretely enables is largely implicit. Similarly, the concerns developers seem to have about transparency go unstated. This lack of clarity has led some researchers to critique transparency demands as disconnected from the actual benefits, or risks, to specific stakeholders. We analyze documentation from four stakeholder groups to create a taxonomy of the risks and benefits of dataset transparency. Data transparency is perceived as a risk or a benefit depending on a stakeholder's position, rather than wholesale. We also propose data availability and data documentation as two lenses through which to consider transparency. We discuss how best to strategically promote situational data transparency that takes into account the relationship between stakeholder position, transparency modality, and benefits/risks.

Authors
Morgan Klaus Scheuerman
Sony AI, Broomfield, Colorado, United States
Wiebke Hutiri
Sony AI, Zurich, Switzerland
Aida Rahmattalabi
Sony AI, Los Angeles, California, United States
Victoria Matthews
Sony AI, New York, New York, United States
Alice Xiang
Sony AI, Seattle, Washington, United States
Jerone Andrews
Sony AI, London, United Kingdom
Active and Passive Decisions: How Ethical Choices Are Made (and Missed) in NLP Research
Abstract

While AI ethics interventions often focus on how researchers should navigate consequential choices, they may overlook a prior question: when do researchers recognize they are making a decision at all? This qualitative study examines how academic NLP teams confront “decision moments”: junctures where latent alternative paths could be considered. We propose a railyard problem analogy: where trolley problems presume a discrete choice between visible options, railyard problems concern whether alternative paths register as possibilities at all. Drawing on decision-tracing interviews across four NLP projects, we demonstrate how technical defaults, institutional structures, and tacit norms (infraethics) combine to organize research as a human-infrastructural process. Many consequential outcomes arise through "passive decisions", where alternatives exist but never become sufficiently visible, viable, or voiced (VVV) to warrant deliberation; "active decisions" emerge only when VVV conditions are met. Our analysis suggests ethics interventions should cultivate the collaborative conditions under which alternatives become recognizable.

Authors
Kayla Uleah
Georgia Institute of Technology, Atlanta, Georgia, United States
Betsy DiSalvo
Georgia Institute of Technology, Atlanta, Georgia, United States
Amanda Meng
Georgia Institute of Technology, Atlanta, Georgia, United States