AI Governance and Accountability

Conference
CHI 2026
Funding AI for Good: A Call for Meaningful Engagement
Abstract

Artificial Intelligence for Social Good (AI4SG) is a growing area that explores AI's potential to address social issues, such as public health. Yet prior work has shown limited evidence of its tangible benefits for intended communities, and projects frequently face real-world deployment and sustainability challenges. While existing HCI literature on AI4SG initiatives primarily focuses on the mechanisms of funded projects and their outcomes, much less attention has been given to the upstream funding agendas that influence project approaches. In this work, we conducted a reflexive thematic analysis of 35 funding documents, representing about $410 million USD in total investments. We uncovered a spectrum of conceptual framings of AI4SG and the approaches that funding rhetoric promoted: from biasing towards technology capacities (more techno-centric) to emphasizing contextual understanding of the social problems at hand alongside technology capacities (more balanced). Drawing on our findings on how funding documents construct AI4SG, we offer recommendations for funders to embed more balanced approaches in future funding call designs. We further discuss implications for how the HCI community can positively shape AI4SG funding design processes.

Authors
Hongjin Lin
Harvard University, Allston, Massachusetts, United States
Anna Kawakami
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Catherine D'Ignazio
MIT, Cambridge, Massachusetts, United States
Kenneth Holstein
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Krzysztof Z. Gajos
Harvard University, Allston, Massachusetts, United States
On the Open Access of SIGCHI
Abstract

ACM's transition to full open access (OA) may fundamentally reshape publication practices within the SIGCHI community. However, the current state of OA adoption and the potential impact of this shift remain underexplored, which limits both scholarly understanding and informed actions. To address this gap, we conduct a large-scale bibliometric analysis of SIGCHI publications from 2001 to 2024. We document the prevalence of OA in our community and author characteristics associated with OA uptake, and assess the projected impact of the transition regarding financial cost and scholarly visibility. Our results indicate that our community is well-positioned for this shift, with fewer than 10.1% of the papers expected to incur additional OA fees. This move to OA is likely to boost citations, especially cross-community citations, but risks further marginalizing under-resourced authors. We discuss the broader implications of these findings for fostering a sustainable future for our community.

Authors
Zhilong Chen
Tsinghua University, Beijing, China
Yong Li
Tsinghua University, Beijing, China
VisGuardian: A Lightweight Group-based Visual Privacy Control Technique For Smart Glasses in Home Environments
Abstract

Always-on sensing by AI applications on AR glasses makes traditional permission techniques inefficient for context-dependent private visual data within home environments. The home presents a challenging privacy context due to its many sensitive objects and the intimate nature of daily routines. We propose VisGuardian, a fine-grained, content-based visual permission technique for AR glasses. VisGuardian features a group-based control mechanism that enables users to efficiently manage permissions for multiple private objects. VisGuardian detects objects using YOLO and adopts a pre-classified schema to group them. By selecting a single object, users can obscure groups of related objects based on criteria including privacy sensitivity, object category, or spatial proximity. A technical evaluation shows VisGuardian achieves mAP50 of 0.6704 with only 14.0 ms latency and a 1.7% increase in battery consumption per hour. Furthermore, a user study (N=24) comparing VisGuardian to slider-based and object-based baselines found it to be significantly faster for setting permissions and preferred by users for its efficiency, effectiveness, and ease of use.

Authors
Shuning Zhang
Tsinghua University, Beijing, China
Qucheng Zang
Institute of Computational Arts, Hangzhou, China
Yongquan 'Owen' Hu
National University of Singapore, Singapore, Singapore
Jiachen Du
The Future Laboratory, Tsinghua University, Beijing, China
Xueyang Wang
Tsinghua University, Beijing, China
Yan Kong
CS, Beijing, China
Xinyi Fu
Tsinghua University, Beijing, China
Suranga Nanayakkara
School of Computing, National University of Singapore, Singapore
Xin Yi
Tsinghua University, Beijing, China
Hewu Li
Tsinghua University, Beijing, China
"It just requires so much more creativity": Barriers and Workarounds to Gathering Information for AI Contestation
Abstract

Gathering information about AI systems is essential for contesting their use; it forms the basis of arguments about how AI is causing harm. Information thus plays a central role for advocates like lawyers, journalists, and auditors contesting harmful AI systems. However, there is little systematic understanding of how these actors, many of whom are newly encountering AI in their advocacy work, access and use information effectively in this process. Understanding this information work can offer valuable insights for supporting effective contestation of harmful AI systems. To better understand information work in AI contestation, we interviewed 18 advocates in the United States (US) who have contested the use of AI in high-stakes domains, such as public benefits and housing. We characterize advocates' strategies for accessing information that is useful for contestation, including a range of creative yet resource-intensive and risky workarounds that they use to overcome opacity. We discuss implications of our findings for the effectiveness of popular transparency policy strategies in the US and offer additional ways to support the social fabric that makes advocates' information work effective.

Authors
Sohini Upadhyay
Harvard University, Cambridge, Massachusetts, United States
Dasha Pruss
University of Illinois Chicago, Chicago, Illinois, United States
Alicia DeVrio
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Krzysztof Z. Gajos
Harvard University, Allston, Massachusetts, United States
Naveena Karusala
Georgia Institute of Technology, Atlanta, Georgia, United States
PrivWeb: Unobtrusive and Content-aware Privacy Protection For Web Agents
Abstract

While web agents have gained popularity by automating web interactions, their requirement for interface access introduces privacy risks that are understudied, particularly from users' perspectives. Through a formative study (N=15), we found that users frequently misunderstand agent data practices and desire unobtrusive, transparent data management. To achieve this, we developed PrivWeb, a trusted add-on for web agents that utilizes a localized LLM to anonymize private information on interfaces based on user preferences. It employs tiered delegation to balance automation and intrusiveness, using ambient notifications for low-sensitivity data and enforcing a mandatory pause for high-sensitivity data. A user study (N=14) across travel, information retrieval, shopping, and entertainment tasks showed that PrivWeb enhances perceived privacy protection and trust compared to transparency-only baselines, without increasing cognitive load. Crucially, we identified users' delegation strategies: they prefer to manually execute sensitive steps involving high-sensitivity data, while granting the agent access to low-sensitivity data.

Authors
Shuning Zhang
Tsinghua University, Beijing, China
Yutong Jiang
Tongji University, Shanghai, China
Rongjun Ma
Aalto University, Espoo, Finland
Yuting Yang
University of Michigan, Ann Arbor, Michigan, United States
Mingyao Xu
University of Washington, Seattle, Washington, United States
Zhixin Huang
Shantou University, Shantou, China
Xin Yi
Tsinghua University, Beijing, China
Hewu Li
Tsinghua University, Beijing, China
"Computer Says No": Disabled Welfare Experiences and Envisioned Futures Under AI Governance
Abstract

Progressive digitisation and adoption of artificial intelligence (AI) are reshaping welfare services in ways that risk compounding inequalities for disabled people. Globally, many governments present these reforms as beneficial: streamlining processes, reducing costs and eliminating delays. Yet digitisation and automation of welfare decision-making can deepen exclusion and erode human accountability. In response, this paper foregrounds the lived experiences of people with the communication disability aphasia in navigating digitised welfare and their perspectives on AI-automated futures. We report findings from a four-stage participatory design study involving eight workshops with 42 recruited co-designers. Reflexive thematic analysis identified five challenges: the cost of performing disability, geographies of inequity, navigating digital bureaucracy, the accessibility paradox and hostile design. Co-designers voiced concerns about AI-automation but envisioned inclusive future alternatives: AI dialogues that are patient, multimodal and supportive; welfare systems that are compassionate, transparent and retain human recourse; and infrastructures that are open, publicly governed and truthful.

Authors
Humphrey Curtis
King's College London, London, United Kingdom
Adam D. G. Jenkins
King's College London, London, United Kingdom
Alistair Gentry
Independent, London, United Kingdom
Sioban Zacharek
Aphasia Re-Connect, London, United Kingdom
Sally McVicker
City St George's, University of London, London, United Kingdom
Timothy Neate
King's College London, London, United Kingdom
Filip Bircanin
King's College London, London, United Kingdom
“It’s Just a Wild, Wild West”: Harnessing Public Procurement as an AI Governance Mechanism
Abstract

Public sector AI has the potential to harm citizens, with risks increasing as its use expands. Recent work positions public procurement as a way to shape public sector AI in line with public interests, using the state’s purchasing power to influence which AI systems are procured and under what conditions. This paper examines how this potential can be realised in practice, drawing on semi-structured interviews with UK and EU buyers, providers, and procurement experts. From our findings, we identify six promising procurement practices that enable the public sector to shape AI in line with public interests, alongside concrete mechanisms to support their uptake. Further, we find that AI-specific procurement approaches remain immature and that systems often enter through informal channels with less scrutiny. We provide directions for both research and practice on how public procurement can be used as a governance mechanism for better aligning AI with public interests.

Award
Honorable Mention
Authors
Anna Ida Hudig
University of Cambridge, Cambridge, United Kingdom
Emma Marlene Kallina
UA Ruhr University Duisburg-Essen, Duisburg, Germany
Jatinder Singh
University Duisburg-Essen, Duisburg, Germany