This study meeting has concluded. Thank you for your participation.
This paper explores how blind and sighted individuals perceive real and spoofed audio, highlighting differences and similarities between the groups. Through two studies, we find that both groups focus on specific human traits in audio, such as accents, vocal inflections, breathing patterns, and emotions, to assess audio authenticity. We further reveal that humans, irrespective of visual ability, can still outperform current state-of-the-art machine learning models in discerning audio authenticity; however, the task proves psychologically demanding. Moreover, detection accuracy scores between blind and sighted individuals are comparable, but each group exhibits unique strengths: the sighted group excels at detecting deepfake-generated audio, while the blind group excels at detecting text-to-speech (TTS)-generated audio. These findings not only deepen our understanding of machine-manipulated and neurally rendered audio but also have implications for developing countermeasures, such as perceptible watermarks and human-AI collaboration strategies for spoofing detection.
While human factors in fraud have been studied by the HCI and security communities, most research has focused on understanding either victims' perspectives or prevention strategies, rather than on fraudsters, their motivations, and their operating techniques.
Additionally, the focus has been on a narrow set of problems: phishing, spam, and bullying. In this work, we seek to understand review fraud on e-commerce platforms through an HCI lens. Through surveys with real fraudsters (N=36 agents and N=38 reviewers), we uncover the sophisticated recruitment, execution, and reporting mechanisms fraudsters use to scale their operations while resisting takedown attempts, including the use of AI tools like ChatGPT. We find that countermeasures that crack down on the communication channels through which these services operate are effective in combating incentivized reviews. This research sheds light on the complex landscape of incentivized reviews, providing insights into the mechanics of underground services and their resilience to removal efforts.
As peer-to-peer (P2P) marketplaces have grown rapidly, concerns related to trust, privacy, and safety (TPS) have also increased. While previous studies have explored these aspects in various P2P marketplaces, there has been limited research on Facebook Marketplace (FM), which is distinguished by its dramatic growth and intricate entanglement with the Facebook social networking site (SNS). To address this knowledge gap, we conducted interviews with 42 FM users in the US and Canada, investigating the TPS factors associated with trading decisions. We identified four categories of factors: pre-existing concerns, signals, interactions, and perceived benefits. We uncover the challenges arising from the interplay of these factors, offer design recommendations for SNS-based marketplaces like FM, and suggest directions for future research. Our study advances the understanding of decision-making processes in SNS-based marketplaces, informs future design improvements for such platforms, and ultimately contributes to a better user experience related to trust, privacy, and safety.
In the era of digital communication, misinformation on social media threatens the foundational trust in these platforms. While myriad measures have been implemented to counteract misinformation, the complex relationship between these interventions and the multifaceted dynamics of trust and distrust on social media remains underexplored. To bridge this gap, we surveyed 1,769 participants in the U.S. to gauge their trust and distrust in social media and examine their experiences with anti-misinformation features. Our research demonstrates that trust and distrust in social media are not simply two ends of a spectrum but can also co-exist, enriching the theoretical understanding of these constructs. Furthermore, participants exhibited varying patterns of trust and distrust across demographic characteristics and platforms. Our results also show that current misinformation interventions helped heighten awareness of misinformation and bolstered trust in social media, but did not alleviate underlying distrust. We discuss theoretical and practical implications for future research.
The status quo of misinformation moderation is a central authority, usually the social platform itself, deciding what content constitutes misinformation and how it should be handled. However, to preserve users’ autonomy, researchers have explored democratized misinformation moderation. One proposition is to enable users to assess content accuracy and specify whose assessments they trust. We explore how these affordances can be provided on the web, without cooperation from the platforms where users consume content.
We present a browser extension that empowers users to assess the accuracy of any content on the web and shows them, in situ, the assessments from their trusted sources. Through a two-week user study, we report on how users perceive such a tool, the kinds of content users want to assess, and the rationales they use in their assessments. We identify implications for designing tools that enable users to moderate content for themselves with the help of those they trust.
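To make the in-situ affordance concrete, the sketch below shows how a content script could fetch assessments of the current page made by sources the user trusts and overlay a summary, without any cooperation from the host platform. This is an illustrative sketch only, not the paper's implementation: the `ASSESSMENT_API` endpoint, the `trustedSources` storage key, and the `Assessment` shape are all assumed names.

```typescript
// Hypothetical content-script sketch; endpoint, storage key, and data shape
// are illustrative assumptions, not the authors' actual code.

declare const chrome: {
  storage: { sync: { get(key: string): Promise<Record<string, unknown>> } };
};

interface Assessment {
  assessor: string;                                // a source the user trusts
  verdict: "accurate" | "inaccurate" | "unsure";   // that source's judgment
  note?: string;                                   // optional rationale
}

// Assumed aggregation service holding user-submitted assessments.
const ASSESSMENT_API = "https://assessments.example.org/api/assessments";

// Read the user's trusted-source list from extension storage (key is assumed).
async function getTrustedSources(): Promise<string[]> {
  const stored = await chrome.storage.sync.get("trustedSources");
  return (stored.trustedSources as string[] | undefined) ?? [];
}

// Fetch assessments of the current page made by the user's trusted sources.
async function fetchAssessments(url: string, trusted: string[]): Promise<Assessment[]> {
  const query = new URLSearchParams({ url, sources: trusted.join(",") });
  const response = await fetch(`${ASSESSMENT_API}?${query}`);
  return response.ok ? response.json() : [];
}

// Render the assessments in situ, without cooperation from the host platform.
function renderBadge(assessments: Assessment[]): void {
  const badge = document.createElement("div");
  badge.style.cssText =
    "position:fixed;bottom:16px;right:16px;padding:8px 12px;z-index:99999;" +
    "background:#fff;border:1px solid #ccc;border-radius:8px;font:13px sans-serif;";
  const flagged = assessments.filter(a => a.verdict === "inaccurate").length;
  badge.textContent = assessments.length === 0
    ? "No assessments yet from your trusted sources."
    : `${assessments.length} trusted assessment(s); ${flagged} flag this page as inaccurate.`;
  document.body.appendChild(badge);
}

(async () => {
  const trusted = await getTrustedSources();
  renderBadge(await fetchAssessments(location.href, trusted));
})();
```

The design point mirrored here is that the verdicts shown come only from assessors the user has explicitly chosen to trust, so moderation happens client-side rather than being imposed by the platform.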