Large language models often produce biased or stereotypical outputs. One way to mitigate this risk is to write more inclusive prompts, but doing so may not come naturally to most users. Therefore, we designed a tool that coaches users to write more inclusive prompts, a strategy that leverages design friction to deliver a media literacy intervention. Data from a user study (N=344) show that, compared to no coaching, inclusive prompt coaching directly increased users’ awareness of algorithmic bias and their perceived prompting efficacy. It also indirectly enhanced their trust in the system and their perceived trust calibration through cognitive elaboration. However, inclusive prompt coaching resulted in a less satisfying user experience. These findings have implications for designing ethical prompting interventions that better communicate and combat algorithmic bias. We discuss the benefits and limitations of inclusive prompt coaching, as well as ways to balance such interventions with usability for the long-term adoption of generative AI systems.