Precision Medicine (PM) transforms the traditional "one-drug-fits-all" paradigm by customising treatments based on individual characteristics, and is an emerging topic for HCI research on digital health. A key element of PM, the Polygenic Risk Score (PRS), uses genetic data to predict an individual's disease risk. Despite its potential, PRS faces barriers to adoption, such as data inclusivity, psychological impact, and public trust. We conducted a mixed-methods study to explore how people perceive PRS, comprising surveys (n=254) and interviews (n=11) with UK-based participants. The interviews were supplemented by interactive storyboards using the ContraVision technique to provoke deeper reflection and discussion. We identified ten key barriers to PRS adoption and five themes, and proposed design implications for a responsible PRS framework. To address the complexities of PRS and enhance broader PM practices, we introduce the term Human-Precision Medicine Interaction (HPMI), which integrates, adapts, and extends HCI approaches to better meet these challenges.
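For readers unfamiliar with the mechanics behind a PRS: it is conventionally computed as a weighted sum of an individual's risk-allele counts, with weights (effect sizes) taken from genome-wide association studies. The sketch below illustrates that idea only; the variant IDs, weights, and genotypes are hypothetical and are not drawn from the study above.

```python
# Minimal illustration of how a Polygenic Risk Score (PRS) is typically computed:
# a weighted sum of risk-allele counts, weighted by GWAS effect sizes.
# All variant IDs, weights, and genotypes are hypothetical, for demonstration only.

# Effect sizes (e.g. log odds ratios) per variant, as reported by a GWAS.
effect_sizes = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}

# One individual's genotype: number of risk alleles carried at each variant (0, 1, or 2).
genotype = {"rs0001": 2, "rs0002": 1, "rs0003": 0}

def polygenic_risk_score(effects: dict[str, float], dosages: dict[str, int]) -> float:
    """Sum of effect size * risk-allele count over variants present in both inputs."""
    return sum(beta * dosages[snp] for snp, beta in effects.items() if snp in dosages)

print(f"{polygenic_risk_score(effect_sizes, genotype):.2f}")  # 0.12*2 - 0.05*1 + 0.30*0 = 0.19
```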
In high-stakes domains, deep analytical processing of online videos is essential for decision-making and knowledge acquisition. However, individuals may lack sufficient cognitive resources and triggers to engage in such processing. To address this, we introduce DeepThinkingMap, a collaborative video mapping system with affordances designed to leverage peers' thoughts and comments to promote reflective and critical thinking. The design supports collaborative mapping of video concepts and open deliberation of personal thoughts over those concepts as "thinking nudges", fostering deeper thinking both for users themselves and for others. Through two experimental studies, we investigated the potential for deeper thinking when accessing peers' thoughts in standalone and collaborative information work, respectively. Results showed that accessing peers' comments enhances personal engagement in reflective and critical thinking and reinforces users' confidence in their correct beliefs about the video topics. This work contributes to understanding the socio-technical-cognitive mechanism of thinking while accessing peer comments, and presents design implications for information and knowledge work.
Emotion artificial intelligence (AI) is deployed in many high-impact areas. However, we know little about people's general attitudes towards and comfort with it across application domains. We conducted a survey with a U.S. representative sample, oversampling for marginalized groups who are more likely to experience emotion AI harms (i.e., people of color, disabled people, minoritized genders) (n=599). We find: 1) although comfort was distinct across 11 contexts, even the most favorable context (healthcare) yielded low comfort levels; 2) participants were significantly more comfortable with inferences of happiness and surprise compared to other emotions; 3) individuals with disabilities and minoritized genders were significantly less comfortable than others across a variety of contexts; and 4) perceived accuracy explained a large proportion of the variance in comfort levels across contexts. We argue that attending to identity is key in examining emotion AI's societal and ethical impacts, and discuss implications for emotion AI deployment and regulation.
Machine learning (ML) is increasingly used in high-stakes settings, yet multiplicity – the existence of multiple good models – means that some predictions are essentially arbitrary. ML researchers and philosophers posit that multiplicity poses a fairness risk, but no studies have investigated whether stakeholders agree. In this work, we conduct a survey to see how multiplicity impacts lay stakeholders’ – i.e., decision subjects’ – perceptions of ML fairness, and which approaches to address multiplicity they prefer. We investigate how these perceptions are modulated by task characteristics (e.g., stakes and uncertainty). Survey respondents think that multiplicity threatens the fairness of model outcomes, but not the appropriateness of using the model, even though existing work suggests the opposite. Participants are strongly against resolving multiplicity by using a single model (effectively ignoring multiplicity) or by randomizing the outcomes. Our results indicate that model developers should be intentional about dealing with multiplicity in order to maintain fairness.
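As a concrete illustration of multiplicity (our own example, not material from the paper), the sketch below trains two equally reasonable scikit-learn models on synthetic data: they reach near-identical accuracy yet disagree on some individuals' predictions, which is exactly the arbitrariness at issue. The dataset and hyperparameters are illustrative.

```python
# Minimal sketch of predictive multiplicity: two models with near-identical
# accuracy can still disagree on individual predictions, making those
# predictions effectively arbitrary. Data and hyperparameters are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

models = []
for seed in (1, 2):
    # Fit each model on a different bootstrap resample of the training data.
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X_train), size=len(X_train))
    models.append(LogisticRegression(max_iter=1000).fit(X_train[idx], y_train[idx]))

preds = [m.predict(X_test) for m in models]
for i, m in enumerate(models):
    print(f"model {i} accuracy: {m.score(X_test, y_test):.3f}")
print("fraction of test points where the models disagree:", np.mean(preds[0] != preds[1]))
```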
Increased AI use in software development raises concerns about the security of AI-generated code. We investigated the impact of security prompts, warnings about insecure AI suggestions, and the use of password storage guidelines (OWASP, NIST) on the security behavior of software developers presented with insecure AI assistance. In an online lab setting, we conducted a study with 76 freelance developers, split across four conditions, who completed a password storage task. Three conditions included a manipulated ChatGPT-like AI assistant that suggested an insecure MD5 implementation. We found a high level of trust in AI-generated code, even when insecure suggestions were presented. While security prompts, AI warnings, and guidelines improved security awareness, 32% of those notified about insecure AI recommendations still accepted weak implementation suggestions, mistakenly considering them secure and often expressing confidence in their choice. Based on our results, we discuss security implications and provide recommendations for future research.
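To make the security gap concrete (this example is ours, not the study's task material), the sketch below contrasts the kind of unsalted MD5 hashing the manipulated assistant suggested with a salted, memory-hard scheme in line with OWASP/NIST-style guidance, here using Python's standard-library scrypt.

```python
# Illustrative contrast between the insecure pattern in the study (unsalted MD5)
# and a salted, memory-hard alternative consistent with OWASP/NIST-style guidance.
# A sketch only; in production, prefer a maintained library (e.g. Argon2id via
# argon2-cffi) and follow current parameter recommendations.
import hashlib
import hmac
import os

def insecure_hash(password: str) -> str:
    # Fast, unsalted MD5: trivially attacked with rainbow tables and GPUs.
    return hashlib.md5(password.encode()).hexdigest()

def secure_hash(password: str) -> tuple[bytes, bytes]:
    # Per-user random salt plus scrypt, a memory-hard key-derivation function.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = secure_hash("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(insecure_hash("correct horse battery staple"))         # weak: do not store this
```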
Deepfake—manipulating individuals' facial features and voices with AI—has introduced new challenges to online scams, with older adults being particularly vulnerable. However, existing safeguarding efforts often portray them as passive recipients, overlooking their perspectives on understanding deepfake-enabled scams and their expectations for protective interventions. To address this gap, we conducted a participatory design workshop with 10 older adults, where participants analyzed simulated deepfake scam videos and critiqued provocative safeguarding designs. Their insights revealed key factors contributing to their vulnerability and how they perceive protective measures. The findings underscored the importance of respecting older adults' autonomy and their role in decision-making, as well as the crucial role of enhanced digital literacy in self-protection. Moreover, while tailored safeguarding measures are essential, a broader societal approach focusing on shared responsibility is also needed. These design implications, viewed through the lens of older adults, contribute to more tailored safeguarding against deepfake scams.
Some people suggest that deliberately looking at the camera during video calls can simulate eye contact and help build trust. We investigated the effects of simulated eye contact in video-call job interviews through an experimental study and a survey. In Study 1, participants acted as interviewers in a mock interview in which a confederate interviewee simulated eye contact half the time; we tracked participants' gaze patterns to understand the effects. In Study 2, we conducted an online survey to confirm the findings of Study 1 at a larger scale, asking people with interviewing experience to evaluate interviewees based on interview videos, half of which featured simulated eye contact. The results of both studies indicate that, contrary to common belief, simulated eye contact had little impact on interviewers' evaluations. We discuss how the results motivate future work and how computational approaches to correcting eye gaze can be deceptive.