High-Stake Situations

Conference Name
CHI 2025
Human-Precision Medicine Interaction: Public Perceptions of Polygenic Risk Score for Genetic Health Prediction
Abstract

Precision Medicine (PM) transforms the traditional "one-drug-fits-all" paradigm by customising treatments based on individual characteristics, and is an emerging topic for HCI research on digital health. A key element of PM, the Polygenic Risk Score (PRS), uses genetic data to predict an individual's disease risk. Despite its potential, PRS faces barriers to adoption, such as data inclusivity, psychological impact, and public trust. We conducted a mixed-methods study to explore how people perceive PRS, formed of surveys (n=254) and interviews (n=11) with UK-based participants. The interviews were supplemented by interactive storyboards with the ContraVision technique to provoke deeper reflection and discussion. We identified ten key barriers and five themes to PRS adoption and proposed design implications for a responsible PRS framework. To address the complexities of PRS and enhance broader PM practices, we introduce the term Human-Precision Medicine Interaction (HPMI), which integrates, adapts, and extends HCI approaches to better meet these challenges.
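For readers unfamiliar with the term: a polygenic risk score is conventionally computed as a weighted sum of an individual's risk-allele counts, with per-variant weights (effect sizes) estimated from genome-wide association studies. A minimal Python sketch of this standard formulation, using made-up example values rather than anything from the paper:

# Polygenic risk score (PRS) as a weighted sum of genotype dosages.
# Effect sizes (GWAS betas) and dosages are illustrative examples only.
effect_sizes = [0.12, -0.05, 0.30, 0.08]  # per-variant weights (betas)
dosages = [2, 1, 0, 1]                    # risk-allele counts (0, 1, or 2)

prs = sum(beta * d for beta, d in zip(effect_sizes, dosages))
print(f"PRS = {prs:.2f}")  # higher score implies higher predicted risk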

Award
Honorable Mention
Authors
Yuhao Sun
University of Edinburgh, Edinburgh, United Kingdom
Albert Tenesa
University of Edinburgh, Edinburgh, United Kingdom
John Vines
University of Edinburgh, Edinburgh, United Kingdom
DOI

10.1145/3706598.3713567

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713567

Signals Beyond Text: Understanding How Accessing Peer Concept Mapping and Commenting Augments Reflective Mind for High-Stake Videos
Abstract

In high-stakes domains, deep analytical processing of online videos is essential for decision-making and knowledge acquisition. However, individuals may lack sufficient cognitive resources and triggers to engage in such processes. To address this, we introduce DeepThinkingMap, a collaborative video mapping system with affordances designed to leverage peers' thoughts and comments to promote reflective and critical thinking. The design supports collaborative mapping of video concepts and open deliberation of personal thoughts over those concepts as "thinking nudges" that foster deeper thinking for oneself and others. Through two experimental studies, we investigated the potential for deeper thinking when accessing peers' thoughts in standalone and collaborative information work, respectively. Results illustrated that accessing peers' comments enhances personal engagement in reflective and critical thinking, and reinforces confidence in correct beliefs about the video topics. This work contributes to understanding the socio-technical-cognitive mechanism of thinking while accessing peer comments, and presents design implications for information and knowledge work.

Authors
Jingxian Liao
UC Davis, Davis, California, United States
Fu-Yin Cherng
National Chung Cheng University, Chiayi, Taiwan
Mrinalini Singh
UC Davis, Davis, California, United States
Hao-Chuan Wang
UC Davis, Davis, California, United States
DOI

10.1145/3706598.3713426

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713426

Public Perceptions About Emotion AI Use Across Contexts in the United States
Abstract

Emotion artificial intelligence (AI) is deployed in many high-impact areas. However, we know little about people's general attitudes towards and comfort with it across application domains. We conducted a survey with a U.S. representative sample, oversampling for marginalized groups who are more likely to experience emotion AI harms (i.e., people of color, disabled people, minoritized genders) (n=599). We find: 1) although comfort was distinct across 11 contexts, even the most favorable context (healthcare) yielded low comfort levels; 2) participants were significantly more comfortable with inferences of happiness and surprise compared to other emotions; 3) individuals with disabilities and minoritized genders were significantly less comfortable than others across a variety of contexts; and 4) perceived accuracy explained a large proportion of the variance in comfort levels across contexts. We argue that attending to identity is key in examining emotion AI's societal and ethical impacts, and discuss implications for emotion AI deployment and regulation.

Authors
Nazanin Andalibi
University of Michigan, Ann Arbor, Michigan, United States
Alexis Shore Ingber
University of Michigan, Ann Arbor, Michigan, United States
DOI

10.1145/3706598.3713501

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713501

Perceptions of the Fairness Impacts of Multiplicity in Machine Learning
Abstract

Machine learning (ML) is increasingly used in high-stakes settings, yet multiplicity – the existence of multiple good models – means that some predictions are essentially arbitrary. ML researchers and philosophers posit that multiplicity poses a fairness risk, but no studies have investigated whether stakeholders agree. In this work, we conduct a survey to see how multiplicity impacts lay stakeholders’ – i.e., decision subjects’ – perceptions of ML fairness, and which approaches to address multiplicity they prefer. We investigate how these perceptions are modulated by task characteristics (e.g., stakes and uncertainty). Survey respondents think that multiplicity threatens the fairness of model outcomes, but not the appropriateness of using the model, even though existing work suggests the opposite. Participants are strongly against resolving multiplicity by using a single model (effectively ignoring multiplicity) or by randomizing the outcomes. Our results indicate that model developers should be intentional about dealing with multiplicity in order to maintain fairness.
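To make the central concept concrete: predictive multiplicity means two models of near-identical overall accuracy can assign different outcomes to the same individual. The sketch below is our own illustration using scikit-learn, not the paper's experimental setup:

# Two comparably accurate models that disagree on individual predictions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

m1 = LogisticRegression(C=1.0).fit(X_tr, y_tr)   # weak regularization
m2 = LogisticRegression(C=0.01).fit(X_tr, y_tr)  # strong regularization
print("accuracy m1:", m1.score(X_te, y_te))
print("accuracy m2:", m2.score(X_te, y_te))

# Test individuals whose predicted outcome flips between the two models:
# for them, the decision is effectively arbitrary.
flips = (m1.predict(X_te) != m2.predict(X_te)).mean()
print(f"disagreement rate: {flips:.1%}")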

Authors
Anna P. Meyer
University of Wisconsin-Madison, Madison, Wisconsin, United States
Yea-Seul Kim
Apple, Boulder, Colorado, United States
Loris D'Antoni
University of California San Diego, San Diego, California, United States
Aws Albarghouthi
University of Wisconsin-Madison, Madison, Wisconsin, United States
DOI

10.1145/3706598.3713524

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713524

Exploring the Impact of Intervention Methods on Developers’ Security Behavior in a Manipulated ChatGPT Study
Abstract

Increased AI use in software development raises concerns about the security of AI-generated code. We investigated the impact of security prompts, warnings about insecure AI suggestions, and the use of password storage guidelines (OWASP, NIST) on the security behavior of software developers presented with insecure AI assistance. In an online lab setting, we conducted a study with 76 freelance developers who completed a password storage task under one of four conditions. Three conditions included a manipulated ChatGPT-like AI assistant that suggested an insecure MD5 implementation. We found a high level of trust in AI-generated code, even when insecure suggestions were presented. While security prompts, AI warnings, and guidelines improved security awareness, 32% of those notified about insecure AI recommendations still accepted weak implementation suggestions, mistakenly considering them secure and often expressing confidence in their choice. Based on our results, we discuss security implications and provide recommendations for future research.
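For context on why the MD5 suggestion is insecure: MD5 is a fast, unsalted hash, so MD5-hashed passwords can be brute-forced or looked up cheaply, whereas OWASP and NIST guidance calls for a random salt and a deliberately slow key-derivation function. A minimal Python sketch of the contrast (parameter choices follow current OWASP advice; this is not code from the study):

import hashlib
import os

password = b"correct horse battery staple"

# Insecure, mirroring the kind of suggestion the manipulated assistant
# made: fast, unsalted MD5 is trivial to crack at scale.
weak = hashlib.md5(password).hexdigest()

# Guideline-compliant: random salt plus a slow KDF (PBKDF2-HMAC-SHA256
# with 600,000 iterations, per the OWASP Password Storage Cheat Sheet).
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

print("MD5:   ", weak)
print("PBKDF2:", strong.hex())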

Award
Honorable Mention
Authors
Raphael Serafini
Ruhr University Bochum, Bochum, Germany
Asli Yardim
Ruhr University Bochum, Bochum, Germany
Alena Naiakshina
Ruhr University Bochum, Bochum, Germany
DOI

10.1145/3706598.3713989

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713989

Hear Us, then Protect Us: Navigating Deepfake Scams and Safeguard Interventions with Older Adults through Participatory Design
Abstract

Deepfake—manipulating individuals' facial features and voices with AI—has introduced new challenges to online scams, with older adults being particularly vulnerable. However, existing safeguarding efforts often portray older adults as passive recipients, overlooking their perspectives on deepfake-enabled scams and their expectations for protective interventions. To address this gap, we conducted a participatory design workshop with 10 older adults, where participants analyzed simulated deepfake scam videos and critiqued provocative safeguarding designs. Their insights revealed key factors contributing to their vulnerability and how they perceive protective measures. The findings underscored the importance of respecting older adults' autonomy and their role in decision-making, as well as the crucial role of enhanced digital literacy in self-protection. Moreover, while tailored safeguarding measures are essential, a broader societal approach focused on shared responsibility is also needed. These design implications, viewed through the lens of older adults, contribute to more tailored safeguarding against deepfake scams.

Authors
Yuxiang Zhai
Tsinghua University, Beijing, China
Xiao Xue
Tsinghua University, Beijing, China
Zekai Guo
Tsinghua University, Beijing, China
Tongtong Jin
Tsinghua University, Beijing, China
Yuting Diao
Tsinghua University, Beijing, China
Jihong Jeung
Tsinghua University, Beijing, China
DOI

10.1145/3706598.3714423

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714423

Investigating the Effects of Simulated Eye Contact in Video Call Interviews
Abstract

Some people suggest that deliberately looking at the camera during video calls can simulate eye contact and help build trust. In this study, we investigated the effects of simulated eye contact in video calls and job interviews through an experimental study and a survey. In Study 1, participants acted as interviewers in a mock interview in which a confederate interviewee simulated eye contact half of the time; we tracked participants' gaze patterns to understand the effects. In Study 2, we conducted an online survey to confirm the findings of Study 1 at a larger scale, asking people with interviewing experience to evaluate interviewees based on interview videos, half of which featured simulated eye contact. The results of both studies indicate that, contrary to common belief, simulated eye contact had little impact on evaluations. We discuss how these results motivate future work and how computational approaches to correcting eye gaze can be deceptive.

著者
Andrew Jelson
Virginia Tech, Blacksburg, Virginia, United States
Md Tahsin Tausif
Virginia Tech, Blacksburg, Virginia, United States
Sol Ie Lim
Virginia Tech, Blacksburg, Virginia, United States
Soumya Khanna
Virginia Tech, Blacksburg, Virginia, United States
Sang Won Lee
Virginia Tech, Blacksburg, Virginia, United States
DOI

10.1145/3706598.3713282

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713282
