Gathering information about AI systems is essential for contesting their use; it forms the basis of arguments about how AI is causing harm. Information thus plays a central role for advocates, such as lawyers, journalists, and auditors, who contest harmful AI systems. However, there is little systematic understanding of how these actors, many of whom are newly encountering AI in their advocacy work, access and use information effectively in this process. Understanding this information work can offer valuable insights for supporting effective contestation of harmful AI systems. To better understand it, we interviewed 18 advocates in the United States (US) who have contested the use of AI in high-stakes domains such as public benefits and housing. We characterize advocates' strategies for accessing information useful for contestation, including a range of creative yet resource-intensive and risky workarounds that they use to overcome opacity. We discuss the implications of our findings for the effectiveness of popular transparency policy strategies in the US and offer additional ways to support the social fabric that makes advocates' information work effective.
ACM CHI Conference on Human Factors in Computing Systems