Beyond Precision: Understanding the Impact of Algorithmic Accuracy and Transparency on User Perceptions in Keyword-Driven Contextual Advertising
Description

Algorithms frequently manage online advertising markets, aligning advertisements with article topics. Our work investigates how users perceive the relevance of ads to articles when ads are placed using different keyword extraction algorithms, including Large Language Models (LLMs), and how transparency about the placement procedure influences these perceptions and behavioral intentions. We conducted an online user experiment (N = 498) in which ads were matched with news articles using the keyword-extraction methods TF-IDF, KeyBERT, and DeepSeek. Results indicate that lightweight methods can match advanced LLMs in delivering high user-perceived ad-article relevance, which in turn fosters click and purchase intentions. However, providing explanations for the ad-article placements by displaying extracted keywords reduces ad interest and thereby weakens behavioral intentions, while simultaneously increasing perceived relevance and moderating algorithm effects. These findings highlight the complex impact of transparency-increasing explanations and suggest that algorithmic precision metrics must be complemented by user perception and intention measures.
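The lightweight TF-IDF baseline named in the abstract can be sketched in a few lines. The pure-Python version below is an illustration of the general technique, not the study's implementation; the corpus, tokenization, and parameter names are assumptions for the example.

```python
import math
from collections import Counter

def tfidf_keywords(docs, doc_index, top_k=3):
    """Rank the terms of one document by TF-IDF against a small corpus.

    Illustrative sketch: whitespace tokenization and an unsmoothed IDF,
    chosen for brevity rather than fidelity to any particular library.
    """
    tokenized = [doc.lower().split() for doc in docs]
    n_docs = len(tokenized)
    # Document frequency: in how many documents each term appears.
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))
    tokens = tokenized[doc_index]
    tf = Counter(tokens)
    # TF-IDF: term frequency weighted by inverse document frequency,
    # so terms common across the whole corpus score low.
    scores = {
        term: (count / len(tokens)) * math.log(n_docs / df[term])
        for term, count in tf.items()
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_k]
```

On a toy corpus, `tfidf_keywords(articles, 0, top_k=2)` surfaces terms distinctive to the first article while down-weighting words shared across articles, which is the signal an ad matcher would key on.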

Why Don't People Follow Robot Leaders? Understanding the Effects of Power Legitimacy on Compliance with Agents
Description

Artificially intelligent systems such as robots are increasingly integrated into the workplace and are gaining more power. Yet studies on robot power and compliance report mixed findings. To address these inconsistencies, we introduced legitimacy, defined as people's psychological acceptance of power. Three preregistered experiments were conducted (N = 431). In Experiments 1 and 2, we manipulated power assignment (robot power vs. human power) and legitimacy of power (legitimate, illegitimate, no explanation) through competence and procedural fairness. The results showed that participants complied more with legitimate robot power than with illegitimate robot power. In Experiment 3, we examined whether perceptions of legitimacy would emerge naturally in a more ecologically valid collaboration. Results of a multigroup mediation model showed that the robot leader was perceived as less legitimate than the human leader, which accounted for the reduced compliance with the robot’s decisions. In all three experiments, people’s perceptions of the social attributes of robots holding power, as well as their affective responses, were negatively affected. Theoretical and design implications are discussed.

Speaking Through Chatbots or Text: How Format Shapes Information Agreement, Reactance, Environmental Awareness, and Trust
Description

Recent advances in large language models suggest that conversational agents (CAs) equipped with environmental knowledge are a promising way to promote environmental awareness. However, it remains unclear whether information provided by a CA outperforms static text in increasing agreement, decreasing reactance, and fostering environmental awareness and trust. In this preregistered, multi-week, repeated-measures online intervention (N = 449), we varied information format (CA, text) and information valence (positive, negative, neutral), both between subjects. Participants interacted with the CA over four consecutive weeks. Information delivered by the CA led to higher agreement and lower reactance regardless of the valence of the information and the time point. Environmental awareness increased over time, especially for participants with low initial environmental awareness, but these increases were independent of format and valence. Trust in the CA also increased over time in the negative and neutral valence conditions, but not in the positive one.

When AI Rewrites the News: How Sentiment, Framing, and LLM Disclosure Shape Perceptions
Description

Public concern over media-driven polarization and the rise of AI-modified news has sparked interest in how sentiment and framing shape perceptions. This study examines variations in sentiment (neutral vs. extreme) and framing (balanced vs. one-sided) in LLM-transformed news, along with disclosure of LLM involvement, to assess effects on readers’ emotions, perceptions, and credibility judgments. In a 2×2 between-subjects experiment (≈180 U.S. participants) plus a baseline control group (n = 45), articles were adapted from real news and transformed with LLMs. Results show that extreme sentiment worsened outcomes, heightening negative emotions and lowering trustworthiness, while framing exerted more nuanced effects. Balanced news articles with extreme sentiment elicited amplified perceptions of bias and surprise, consistent with the Hostile Media Effect, whereby balanced coverage appears biased because it amplifies opposing viewpoints. Disclosure of LLM involvement modestly improved trustworthiness without undermining fairness or credibility. Overall, the findings highlight the need for transparent, user-facing interventions and editorial oversight in AI-mediated journalism.

Simple changes to content curation algorithms affect the beliefs people form in a collaborative filtering experiment
Description

Content-curating algorithms provide a crucial service for social media users by surfacing relevant content, but they can also bring about harms when their objectives are misaligned with user values and welfare. Yet, few controlled experiments on the potential behavioral and cognitive consequences of this alignment problem exist. In a preregistered, two-wave, collaborative filtering experiment (total N=1,500), we demonstrate that simple changes to how posts are sampled and ranked can affect the beliefs people form. Our results show observable differences in two types of outcomes within statistically constructed groups: belief accuracy and consensus. We find partial support for hypotheses that the recently proposed approaches of "bridging-based ranking" and "intelligence-based ranking" promote consensus and belief accuracy, respectively. We also find that while personalized, engagement-based ranking promotes posts that participants perceive favorably, it simultaneously leads those participants to form more polarized and less accurate beliefs than any of the other algorithms considered.
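The ranking variants the abstract compares can be illustrated with toy scoring rules. The functions below are simplified assumptions for exposition (a total-likes score for engagement-based ranking, a min-across-groups score for bridging-based ranking); the paper's actual algorithms may differ.

```python
def engagement_rank(posts):
    """Engagement-based ranking (illustrative): sort posts by total
    engagement summed across two audience groups."""
    return sorted(posts, key=lambda p: p["likes_a"] + p["likes_b"], reverse=True)

def bridging_rank(posts):
    """Bridging-based ranking (simplified assumption): a post scores only
    as high as its approval in the *less* supportive group, so content
    with cross-group appeal outranks one-sided crowd-pleasers."""
    return sorted(posts, key=lambda p: min(p["likes_a"], p["likes_b"]), reverse=True)
```

The contrast matters for the abstract's finding: a divisive post with heavy one-sided engagement tops the engagement-based list, while the bridging objective promotes the post both groups moderately approve of.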

Satisficing vs. Maximizing in Prompt Writing: Trait and Task Effects in Human–AI Interaction
Description

Generative AI systems are increasingly used for cognitively demanding tasks, yet little is known about how psychological factors shape user prompting behavior. This study investigates the role of individual satisficing tendencies in shaping maximizing behavior when selecting prompt strategies across different task domains. In an online vignette experiment with 132 participants, individuals selected between satisficing and maximizing prompt options in five problem-solving scenarios. Satisficing tendencies were assessed using the Short Maximization Inventory, with algorithm aversion and prompt-writing competence included as controls. Linear mixed models showed that stronger satisficing tendencies were associated with reduced maximizing behavior, while higher self-reported competence predicted more maximizing. Participants maximized more in job-related and creative tasks, but satisficed more in writing and technical-support tasks, suggesting that task characteristics shape prompting strategies. The results demonstrate that individual differences systematically affect interactions with generative AI, highlighting the importance of considering psychological dispositions in future research on human–AI collaboration.

Evolving User Profiles and Adoption of Cyborg Technologies: Evidence from a Repeated Cross-Sectional Study in Switzerland
Description

Cyborg technologies, such as subcutaneous implants and brain-computer interfaces, are spreading from early-adopter communities toward the general population. Understanding this transition is timely, because further diffusion no longer hinges on early adopters' transhumanist beliefs but on the preferences of the general population. Through two cross-sectional studies in Switzerland in 2023 (n = 1,000) and 2025 (n = 1,078), we track the diffusion of cyborg technologies, measuring adoption, associated transhumanist beliefs, risk/benefit perceptions, and demographic characteristics. Through latent profile analysis and multinomial regression, we identify three evolving profiles: Convinced, Considering, and Skeptical individuals. Over time, the Convinced and Skeptical profiles grew. The Convinced profile shifted toward more moderate risk/benefit perceptions and distanced itself from transhumanist beliefs, growing among young individuals and across genders. Conversely, the Skeptical profile maintained high risk perceptions that still hinder adoption. These findings capture characteristics of the ongoing diffusion of cyborg technologies and can inform both technology development and future research targeting distinct user profiles.
