Perception of Systems

Conference Name
CHI 2025
Understanding the Effects of AI-based Credibility Indicators When People Are Influenced By Both Peers and Experts
Abstract

In an era marked by rampant online misinformation, artificial intelligence (AI) technologies have emerged as tools to combat this issue. This paper examines the effects of AI-based credibility indicators on people's online information processing under social influence from both peers and "experts". Via three pre-registered, randomized experiments, we confirm that accurate AI-based credibility indicators enhance people's ability to judge information veracity and reduce their propensity to share false information, even under the influence of both laypeople peers and experts. Notably, these effects remain consistent regardless of whether the experts' expertise is verified, with particularly pronounced impacts when AI predictions disagree with the experts. However, the competence of the AI moderates these effects, as incorrect predictions can mislead people. Furthermore, exploratory analyses suggest that, under our experimental settings, the impact of the AI-based credibility indicator is larger than that of the expert. Additionally, the AI's influence on people is partially mediated through peer influence, although people automatically discount the opinions of their laypeople peers when they see agreement between the AI's and the peers' opinions. We conclude by discussing the implications of utilizing AI to combat misinformation.

Authors
Zhuoran Lu
Purdue University, West Lafayette, Indiana, United States
Patrick Li
Purdue University, West Lafayette, Indiana, United States
Weilong Wang
Purdue University, West Lafayette, Indiana, United States
Ming Yin
Purdue University, West Lafayette, Indiana, United States
DOI

10.1145/3706598.3713871

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713871

Labeling Synthetic Content: User Perceptions of Label Designs for AI-Generated Content on Social Media
Abstract

In this research, we explored the efficacy of various warning label designs for AI-generated content on social media platforms (e.g., deepfakes). We devised and assessed ten distinct label designs that varied across the dimensions of sentiment, color/iconography, positioning, and level of detail. Our experimental study involved 911 participants randomly assigned to one of these ten label designs or a control group, all evaluating social media content. We explored their perceptions relating to 1) belief that the content is AI-generated, 2) trust in the labels, and 3) social media engagement perceptions of the content. The results demonstrate that the presence of labels had a significant effect on users' belief that the content was AI-generated, a deepfake, or edited by AI. However, their trust in the labels varied significantly based on the label design. Notably, labels did not significantly change engagement behaviors such as liking, commenting, and sharing. However, there were significant differences in engagement based on content type (political vs. entertainment). This investigation contributes to the field of human-computer interaction by defining a design space for label implementation and providing empirical support for the strategic use of labels to mitigate the risks associated with synthetically generated media.

Authors
Dilrukshi Gamage
University of Colombo School of Computing, Colombo, Sri Lanka
Dilki Sewwandi
University of Colombo School of Computing, Colombo, Sri Lanka
Min Zhang
The Open University, Milton Keynes, Buckinghamshire, United Kingdom
Arosha K. Bandara
The Open University, Milton Keynes, United Kingdom
DOI

10.1145/3706598.3713171

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713171

"Impressively Scary:" Exploring User Perceptions and Reactions to Unraveling Machine Learning Models in Social Media Applications
Abstract

Machine learning models deployed locally in social media applications power features such as face filters that read faces in real time, and they expose sensitive attributes to the apps. However, the deployment of these machine learning models (e.g., when, where, and how they are used) in social media applications is opaque to users. We aim to address this opacity and investigate how social media users' perceptions and behaviors change once they are exposed to these models. We conducted user studies (N=21) and found that participants were unaware of both what the models output and when the models were used in Instagram and TikTok, two major social media platforms. In response to being exposed to the models' functionality, we observed long-term behavior changes in 8 participants. Our analysis uncovers the challenges and opportunities in providing transparency for machine learning models that interact with local user data.

Authors
Jack West
University of Wisconsin-Madison, Madison, Wisconsin, United States
Bengisu Cagiltay
University of Wisconsin-Madison, Madison, Wisconsin, United States
Shirley Zhang
University of Wisconsin-Madison, Madison, Wisconsin, United States
Jingjie Li
University of Edinburgh, Edinburgh, United Kingdom
Kassem Fawaz
University of Wisconsin-Madison, Madison, Wisconsin, United States
Suman Banerjee
University of Wisconsin-Madison, Madison, Wisconsin, United States
DOI

10.1145/3706598.3713256

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713256

Understanding End-User Perception of Transfer Risks in Smart Contracts
Abstract

Blockchain smart contracts are increasingly used in critical use cases (e.g., financial transactions). Thus, it is important to ensure that their end-users understand the risks involved in attempting token transfers. Addressing this, we investigate end-user comprehension of five transfer risks (e.g., the end-user being blacklisted) in the most popular Ethereum contract, USD Tether (USDT), and their prevalence in other top ERC-20 contracts. First, we conducted a user study investigating end-user comprehension of transfer risks in USDT with 110 participants. Second, we performed source code analysis of the next 78 most popular ERC-20 smart contracts to identify the prevalence of these risks. The study results show that the majority of end-users do not comprehend some real risks and confuse real and fictitious risks. This holds regardless of participants' self-rated programming and Web3 proficiency. The source code analysis demonstrates that the examined risks are present in up to 19.2% of the top ERC-20 contracts.
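
To make the blacklisting risk concrete, here is a minimal illustrative sketch in Python (not taken from the paper; ToyToken, mentions_blacklist, and the marker list are hypothetical names chosen for illustration). It models an ERC-20-style ledger whose transfers fail when either party is blacklisted, followed by a naive keyword scan over Solidity source text of the sort a prevalence analysis might start from.

class ToyToken:
    def __init__(self):
        self.balances = {}      # address -> token balance
        self.blacklist = set()  # addresses barred from transfers (cf. USDT's blacklist)

    def mint(self, addr, amount):
        self.balances[addr] = self.balances.get(addr, 0) + amount

    def ban(self, addr):
        self.blacklist.add(addr)

    def transfer(self, sender, recipient, amount):
        # The studied risk: a blacklisted end-user cannot move their tokens at all.
        if sender in self.blacklist or recipient in self.blacklist:
            raise PermissionError("transfer blocked: address is blacklisted")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount

# Naive keyword scan over contract source text; a real analysis would inspect the
# parsed contract rather than grep for substrings, but this conveys the shape of
# a prevalence check across many contracts.
BLACKLIST_MARKERS = ("blacklist", "isblacklisted", "addblacklist")

def mentions_blacklist(solidity_source: str) -> bool:
    lowered = solidity_source.lower()
    return any(marker in lowered for marker in BLACKLIST_MARKERS)

if __name__ == "__main__":
    token = ToyToken()
    token.mint("alice", 100)
    token.ban("alice")
    try:
        token.transfer("alice", "bob", 10)
    except PermissionError as err:
        print(err)                      # transfer blocked: address is blacklisted
    sample = "function addBlackList(address _evilUser) public onlyOwner { ... }"
    print(mentions_blacklist(sample))   # True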

Authors
Yustynn Panicker
Singapore University of Technology and Design, Singapore, Singapore
Ezekiel Soremekun
Singapore University of Technology and Design, Singapore, Singapore
Sudipta Chattopadhyay
Singapore University of Technology and Design, Singapore, Singapore
Sumei Sun
Institute for Infocomm Research, A*STAR, Singapore, Singapore
DOI

10.1145/3706598.3713887

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713887

Towards AI Accountability Infrastructure: Gaps and Opportunities in AI Audit Tooling
Abstract

Audits are critical mechanisms for identifying the risks and limitations of deployed artificial intelligence (AI) systems. However, the effective execution of AI audits remains incredibly difficult, and practitioners often need to make use of various tools to support their efforts. Drawing on interviews with 35 AI audit practitioners and a landscape analysis of 435 tools, we compare the current ecosystem of AI audit tooling to practitioner needs. While many tools are designed to help set standards and evaluate AI systems, they often fall short in supporting accountability. We outline challenges practitioners faced in their efforts to use AI audit tools and highlight areas for future tool development beyond evaluation—from harms discovery to advocacy. We conclude that the available resources do not currently support the full scope of AI audit practitioners' needs and recommend that the field move beyond tools for just evaluation and towards more comprehensive infrastructure for AI accountability.

Authors
Victor Ojewale
Brown University, Providence, Rhode Island, United States
Ryan Steed
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Briana Vecchione
Data & Society Research Institute, New York, New York, United States
Abeba Birhane
Trinity College Dublin, Dublin, Ireland
Inioluwa Deborah Raji
University of California, Berkeley, Berkeley, California, United States
DOI

10.1145/3706598.3713301

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713301
