This study session has ended. Thank you for participating.
With AI booming, DeepFakes have emerged as a tool with massive potential for abuse. The hyper-realistic imagery of these manipulated videos, coupled with the expedited delivery models of social media platforms, gives deception, propaganda, and disinformation an entirely new meaning. Hence, raising awareness about DeepFakes and how to accurately flag them has become imperative. However, given differences in human cognition and perception, this is not straightforward. In this paper, we perform an investigative user study and also analyze existing AI detection algorithms from the literature to demystify the unknowns at play behind the scenes when detecting DeepFakes. Based on our findings, we design a customized training program to improve detection and evaluate it on a treatment group drawn from a low-literacy population, which is among the most vulnerable to DeepFakes. Our results suggest that, while DeepFakes are becoming imperceptible, contextualized education and training can help raise awareness and improve detection.
Although the number of influencers is increasing and being an influencer is one of the most frequently mentioned career aspirations of young people, we still know very little about influencers' motivations and actual practices from an HCI perspective. Driven by the emerging field of Human-Food Interaction and novel phenomena on social media such as Finstas, ASMR, Mukbang, and live streaming, we highlight the significance of food influencers as influential content creators and their social media practices. We conducted a qualitative interview study and analyzed over 1,500 posts of food content creators on Instagram, focusing on practices of content creation, photography, staging, posting, and use of technology. Based on our findings, we derived a process model that outlines the practices of this rather small but influential user group. We contribute to the field of HCI by outlining the practices of food influencers as influential content creators within the social media sphere, opening up design spaces for interaction researchers and practitioners.
Within the wider open science reform movement, HCI researchers are actively debating how to foster transparency in their own field. Publication venues play a crucial role in instituting open science practices, especially journals, whose procedures arguably lend themselves better to such practices than conferences. Yet we know little about how much HCI journals presently support open science practices. We identified the 51 journals in which recent CHI first authors most frequently published and coded them according to the Transparency and Openness Promotion (TOP) guidelines, a high-profile standard for evaluating editorial practices. Results indicate that journals in our sample currently do not set or specify clear openness and transparency standards. Out of a maximum score of 29, the modal score was 0 (mean = 2.5, SD = 3.6, max = 15). We discuss potential reasons, the aptness of natural-science-based guidelines for HCI, and next steps for the HCI community in furthering openness and transparency.
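For readers who want to reproduce this kind of scoring summary (mode, mean, SD, max of per-journal TOP scores), a minimal Python sketch is shown below; the score list is a hypothetical placeholder, not the paper's actual data.

```python
from statistics import mean, stdev, mode

# Hypothetical TOP scores for a sample of journals (0-29 scale);
# placeholder values only, not the data reported in the paper.
scores = [0, 0, 0, 1, 2, 0, 4, 7, 0, 3, 0, 15, 1, 0, 2]

print(f"mode = {mode(scores)}")        # most frequent score
print(f"mean = {mean(scores):.1f}")    # average score
print(f"SD   = {stdev(scores):.1f}")   # sample standard deviation
print(f"max  = {max(scores)}")         # highest observed score
```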
Electrodermal activity (EDA) data is widely used in HCI to capture rich and unbiased signals. Results from related fields, however, have suggested several methodological issues that can arise when practices do not follow established standards. In this paper, we present a systematic methodological review of CHI papers involving the use of EDA data, assessed against best practices from the field of psychophysiology, where standards are well established and mature. We found severe issues in our sample at all stages of the research process. To ensure the validity of future research, we highlight pitfalls and offer directions for how to improve community standards.
A user’s perception of ownership over virtual objects, such as cloud files, is generally uncertain. Does this also hold for streaming platforms featuring accounts designed for sharing (DS)? We observe sharing practices within DS accounts of streaming platforms and identify their ownership characteristics and unexpected complications through two mixed-methods studies. We identify two sharing practices: Casual and Cost-splitting. In the former, the owner is the sole payer for the account, whereas in the latter, profile holders split the cost. We distinguish two types of ownership in each practice: primary and dual. In primary ownership, the account owner has the power to allow others to use the account; in dual ownership, primary ownership appears in conjunction with joint ownership, notably displaying asymmetric ownership perceptions among users. Conflicts arise when sharing agreements collapse. We therefore propose design recommendations that bridge ownership differences based on the sharing practices of DS accounts.
Monitoring advertising around controversial issues is an important step in ensuring accountability and transparency of political processes.
To that end, we use the Facebook Ads Library to collect 2,312 migration-related advertising campaigns in Italy over one year.
Our pro- and anti-immigration classifier (F1=0.85) reveals a partisan divide among the major Italian political parties, with anti-immigration ads accounting for nearly 15M impressions.
Although they make up 47.6% of all migration-related ads, anti-immigration ones receive 65.2% of impressions.
We estimate that about two thirds of all captured campaigns use some kind of demographic targeting by location, gender, or age.
We find sharp divides by age and gender: for instance, anti-immigration ads from major parties are 17% more likely to be seen by a male user than by a female user.
We find that, unlike pro-migration parties, anti-immigration ones reach a demographic similar to that of their own voters.
However, their audience changes with topic: an ad from an anti-immigration party is 24% more likely to be seen by a male user when it speaks about migration than when it does not.
Furthermore, the viewership of such campaigns tends to follow the volume of mainstream news around immigration, supporting the theory that political advertisers try to "ride the wave" of current news.
We conclude with policy implications for political communication: since the Facebook Ads Library does not allow one to distinguish between advertisers' intentions and algorithmic targeting, we argue that platforms should share more details about the targeting configuration of socio-political campaigns.
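As a rough illustration of the analysis described in this abstract, the sketch below computes a stance classifier's F1 and the ad-share versus impression-share comparison from a hypothetical export of Facebook Ads Library campaigns; the file name and column names are assumptions, not the paper's actual pipeline (the Ads Library, for instance, reports impressions as ranges rather than exact counts).

```python
import pandas as pd
from sklearn.metrics import f1_score

# Hypothetical schema: one row per ad campaign, with a human stance label
# ("pro" / "anti") on a validation subset, a model prediction, and an
# estimated impression count.
ads = pd.read_csv("migration_ads.csv")  # placeholder file name

# Classifier quality on the labelled validation subset.
val = ads.dropna(subset=["human_label"])
print("F1 =", f1_score(val["human_label"], val["predicted_stance"],
                       pos_label="anti"))

# Share of ads vs. share of impressions by predicted stance
# (the kind of comparison behind "47.6% of ads, 65.2% of impressions").
ad_share = ads["predicted_stance"].value_counts(normalize=True)
impression_share = (ads.groupby("predicted_stance")["impressions"].sum()
                    / ads["impressions"].sum())
print(pd.DataFrame({"ad_share": ad_share,
                    "impression_share": impression_share}))
```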
Maintenance of industrial equipment is done by fitters, electricians, and other maintainers. For safety and quality control, maintainers must follow procedures; historically these have been paper-based. Asset-owning organisations seek to transition maintainers to digital platforms. However, there are limited studies on the potential impact of digitisation on maintenance work and the maintainers who perform it. Our challenge is to identify interface design considerations that support the safe and reliable execution of work. We looked specifically at maintenance procedures and conducted semi-structured interviews with process-plant maintainers. Thematic analysis identified eight factors influencing maintainers' perceptions of using digital technologies in their work. We map these factors to three categories: work identity, agency, and community. These categories are consistent with concepts from the Job Characteristics Model (JCM). The contribution of this work is the relevance of job characteristics in guiding user interface design for maintainers, for which we make a number of recommendations.
Spreadsheet users routinely read, and misread, others' spreadsheets, but the literature offers only a high-level understanding of users' comprehension behaviours. This limits our ability to support millions of users in spreadsheet comprehension activities. Therefore, we conducted a think-aloud study of 15 spreadsheet users who read others' spreadsheets as part of their work. Through qualitative coding of participants' comprehension needs, strategies, and difficulties at 20-second granularity, our study provides the most detailed understanding of spreadsheet comprehension to date.
Participants comprehending spreadsheets spent around 40% of their time seeking additional information needed to understand the spreadsheet. These information-seeking episodes were tedious: around 50% of participants reported feeling overwhelmed. Moreover, participants often failed to obtain the necessary information and instead worked with guesses about the spreadsheet. Eventually, 12 out of 15 participants decided to go back to the spreadsheet's author for clarification. Our findings have design implications for reading as well as writing spreadsheets.
A prominent approach to combating online misinformation is to debunk false content. Here we investigate the downstream consequences of social corrections on users’ subsequent sharing of other content. Being corrected might make users more attentive to accuracy, thus improving their subsequent sharing. Alternatively, corrections might fail to improve subsequent sharing, or even backfire, by making users feel defensive or by shifting their attention away from accuracy (e.g., towards various social factors). We identified N = 2,000 users who shared false political news on Twitter and replied to their false tweets with links to fact-checking websites. We find causal evidence that being corrected decreases the quality, and increases the partisan slant and language toxicity, of the users’ subsequent retweets (but has no significant effect on primary tweets). This suggests that being publicly corrected by another user shifts one’s attention away from accuracy, presenting an important challenge for social correction approaches.
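To make the outcome measure concrete, here is a minimal before/after comparison of the quality of links a user retweets, under an assumed log format; the file and column names ("domain_quality", "corrected_at") are hypothetical, and this simple split omits the controls behind the paper's causal estimates.

```python
import pandas as pd

# Hypothetical retweet log: one row per retweet, with the user id, a
# timestamp, the news-domain quality score of the shared link, and the
# timestamp of the correction reply that user received.
rts = pd.read_csv("retweets.csv", parse_dates=["created_at", "corrected_at"])

rts["period"] = (rts["created_at"] > rts["corrected_at"]).map(
    {False: "before correction", True: "after correction"})

# Average link quality per user before vs. after being corrected.
per_user = rts.groupby(["user_id", "period"])["domain_quality"].mean().unstack()
print(per_user.mean())
```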
In this paper, we investigate the human ability to distinguish political social bots from humans on Twitter. Following motivated reasoning theory from social and cognitive psychology, our central hypothesis is that opinion-incongruent accounts in particular are perceived as social bots when the account is ambiguous about its nature. We also hypothesize that credibility ratings mediate this relationship. We asked N = 151 participants to evaluate 24 Twitter accounts and decide whether the accounts were humans or social bots. Findings support our motivated reasoning hypothesis for a sub-group of Twitter users (those who are more familiar with Twitter): accounts that are opinion-incongruent are evaluated as relatively more bot-like than accounts that are opinion-congruent, regardless of whether the account is clearly a social bot, clearly human, or ambiguous about its nature. This effect was mediated by perceived credibility, in the sense that congruent profiles were evaluated as more credible and were therefore perceived as less bot-like.
New parents, defined as parents of children between the infant and preschooler stages, are increasingly turning to online media to exchange support and information to help with their life-changing transition. Understanding parents' discussions online is crucial to the design and development of technologies that can better support their media interaction. This work studies how new parents use online media, drawing on a large-scale parenting corpus. To do so, we first employed a card-sorting methodology to identify a set of parenting topics, with which we trained BERT classifiers to automatically identify the topics of Reddit posts. We then investigated at scale which parenting topics new parents talked about most, how topics changed over the course of their participation, and how interactions with different topics affected members' engagement in the community. We conclude with implications of our research for designing future research and online parenting communities.
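The snippet below sketches how a fine-tuned BERT topic classifier might be applied to Reddit posts with the Hugging Face transformers library; the model id and example posts are hypothetical placeholders, and the actual topic set in the paper comes from its card-sorting step.

```python
from transformers import pipeline

# Hypothetical fine-tuned topic classifier; "my-org/parenting-topic-bert"
# is a placeholder model id, not the one used in the paper.
classify = pipeline("text-classification", model="my-org/parenting-topic-bert")

posts = [
    "My 8-month-old still wakes up every two hours at night. Any advice?",
    "Looking for a convertible car seat that fits a compact car.",
]
# Print the predicted topic label and confidence for each post.
for post, result in zip(posts, classify(posts)):
    print(f"{result['label']} ({result['score']:.2f}) <- {post[:40]}")
```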
The amount of autonomy in software engineering tools is increasing as developers build increasingly complex systems. We study factors influencing software engineers’ trust in an autonomous tool situated in a high-stakes workplace, because research in other contexts shows that too much or too little trust in autonomous tools can have negative consequences. We present the results of a ten-week ethnographic case study of engineers collaborating with an autonomous tool to write control software at the National Aeronautics and Space Administration in support of high-stakes missions. We find that trust in an autonomous software engineering tool in this setting was influenced by four main factors: the tool’s transparency, its usability, its social context, and the organization’s associated processes. Our observations lead us to frame trust as a quality the operator places in their collaboration with the automated system, and we outline the implications of this framing and our other results for researchers studying trust in autonomous systems, designers of software engineering tools, and organizations conducting high-stakes work with these tools.
Vehicle manufacturers and government agencies are considering using vehicle-to-pedestrian (V2P) communication to improve pedestrian safety. However, there are unanswered questions about whether people will heed alerts and warnings presented through a smartphone. We conducted between-subjects studies with younger and older adults in which they physically crossed a virtual street. They received either permissive alerts (safe to cross), prohibitive warnings (not safe to cross), or no alerts or warnings (control). We found that both older and younger adults were highly likely to heed permissive alerts, even when this meant taking gaps between two vehicles that were smaller than they would typically take on their own. We also found that we could shift participants’ road-crossing behavior toward greater caution when they were only alerted to cross very large gaps between two vehicles. Participants stated that alerts and warnings were useful, but that prohibitive warnings were annoying. These findings give insights into V2P design and pedestrian behavior when smartphone assistance is provided.
There is growing concern that e-commerce platforms are amplifying vaccine misinformation. To investigate, we conduct two sets of algorithmic audits for vaccine misinformation on the search and recommendation algorithms of Amazon, the world's leading e-retailer. First, we systematically audit search results for vaccine-related search queries without logging into the platform (unpersonalized audits). We find that 10.47% of search results promote misinformative health products. We also observe ranking bias, with Amazon ranking misinformative search results higher than debunking ones. Next, we analyze the effects of personalization due to account history, where history is built progressively by performing various real-world user actions, such as clicking a product. We find evidence of a filter-bubble effect in Amazon's recommendations: accounts performing actions on misinformative products are presented with more misinformation than accounts performing actions on neutral or debunking products. Interestingly, once a user clicks on a misinformative product, homepage recommendations become more contaminated than when the user shows an intention to buy that product.
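To illustrate the kind of measurement behind these findings, the sketch below computes misinformation prevalence, a simple ranking-bias comparison, and per-account contamination of homepage recommendations from hypothetical audit logs; the file and column names are assumptions, not the paper's actual audit instrumentation.

```python
import pandas as pd

# Hypothetical search-audit log: one row per search result, with the query,
# the result's rank on the page, and its annotation
# ("misinformative", "debunking", or "neutral").
results = pd.read_csv("search_audit.csv")

# Overall prevalence of misinformative results.
share_misinfo = (results["annotation"] == "misinformative").mean()
print(f"misinformative share: {share_misinfo:.2%}")

# A simple ranking-bias check: do misinformative results sit higher
# (smaller rank) than debunking ones, on average?
mean_rank = results.groupby("annotation")["rank"].mean()
print(mean_rank[["misinformative", "debunking"]])

# Filter-bubble check on personalized homepages: compare the share of
# misinformative recommendations across accounts whose history was built
# on misinformative vs. neutral vs. debunking products.
recs = pd.read_csv("homepage_recs.csv")  # columns: account_history, annotation
contamination = (recs.assign(is_misinfo=recs["annotation"] == "misinformative")
                     .groupby("account_history")["is_misinfo"].mean())
print(contamination)
```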