“Ignorance is not Bliss”: Designing Personalized Moderation to Address Ableist Hate on Social Media

Abstract

Disabled people on social media often experience ableist hate and microaggressions. Prior work has shown that platform moderation often fails to remove ableist hate, leaving disabled users exposed to harmful content. This paper examines how personalized moderation can safeguard users from viewing ableist comments. During interviews and focus groups with 23 disabled social media users, we presented design probes to elicit perceptions of configuring filters for ableist speech (e.g., by intensity of ableism and types of ableism) and customizing the presentation of ableist speech to mitigate harm (e.g., AI rephrasing of the comment and content warnings). We found that participants preferred configuring their filters by type of ableist speech and favored content warnings. We surface participants’ distrust in AI-based moderation, skepticism about AI’s accuracy, and varied tolerances for viewing ableist hate. Finally, we share design recommendations to support users’ agency, mitigate harm from hate, and promote safety.

Authors
Sharon Heung
Cornell Tech, New York, New York, United States
Lucy Jiang
Cornell University, Ithaca, New York, United States
Shiri Azenkot
Cornell Tech, New York, New York, United States
Aditya Vashistha
Cornell University, Ithaca, New York, United States
DOI

10.1145/3706598.3713997

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713997

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Content Moderation

Annex Hall F206
7 presentations
2025-04-28 23:10:00 – 2025-04-29 00:40:00