Misinformation interventions are often evaluated under ideal conditions, yet real-world systems are rarely flawless. We report on an online experiment ($N=1,004$) comparing five state-of-the-art interventions -- inoculation, accuracy prompts, community notes, fact-checks, and indicators -- across TikTok, Telegram, and X. We examined efficacy and user perceptions under flawless and erroneous implementations. Misinformation accompanied by fact-checks and indicators was rated as significantly less accurate, whereas community notes showed weaker effects. Modality did not significantly influence intervention efficacy and had only minor effects on user acceptance. Community notes, fact-checks, and indicators were rated as more helpful but also more annoying than the less informative accuracy prompts. Notably, the efficacy of all interventions disappeared under erroneous conditions, highlighting the crucial role of intervention quality in fostering trust and acceptance. Our findings provide (1) a cross-platform evaluation of interventions and (2) empirical evidence that accuracy and reliability are essential in complex social media environments.
ACM CHI Conference on Human Factors in Computing Systems