Know Your Audience: The benefits and pitfalls of generating plain language summaries beyond the "general" audience

Abstract

Language models (LMs) show promise as tools for communicating science to the general public by simplifying and summarizing complex language. Because models can be prompted to generate text for a specific audience (e.g., college-educated adults), LMs might be used to create multiple versions of plain language summaries for people with different levels of familiarity with scientific topics. However, it is not clear what the benefits and pitfalls of adaptive plain language are. When is simplifying necessary, what are the costs of doing so, and do these costs differ for readers with different background knowledge? Through three within-subjects studies in which we surfaced summaries written for different envisioned audiences to participants of different backgrounds, we found that while simpler text led to the best reading experience for readers with little to no familiarity with a topic, high-familiarity readers tended to ignore certain details in overly plain summaries (e.g., study limitations). Our work provides methods and guidance for adapting plain language summaries beyond the single "general" audience.
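
The audience-conditioned prompting described above can be illustrated with a minimal sketch. This is an illustrative example only, using the OpenAI Python client; the model name, audience labels, and prompt wording are assumptions, not the prompts or models used in the paper.

# Minimal sketch of audience-conditioned plain language summarization.
# Illustrative only: the model, audience labels, and prompt wording are
# assumptions, not the authors' implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

AUDIENCES = {
    "novice": "an adult reader with no background in this scientific field",
    "familiar": "a college-educated reader who follows science news",
    "expert": "a researcher working in an adjacent scientific field",
}

def summarize_for(abstract: str, audience: str) -> str:
    """Generate a plain language summary of `abstract` tailored to `audience`."""
    prompt = (
        f"Summarize the following scientific abstract for {AUDIENCES[audience]}. "
        "Keep the key findings and any study limitations, and adjust the level "
        "of technical detail to suit this reader.\n\n"
        f"Abstract:\n{abstract}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any instruction-following LM works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example: produce one summary per envisioned audience.
# summaries = {a: summarize_for(abstract_text, a) for a in AUDIENCES}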

Authors
Tal August
Allen Institute for AI, Seattle, Washington, United States
Kyle Lo
Allen Institute for AI, Seattle, Washington, United States
Noah A. Smith
University of Washington, Seattle, Washington, United States
Katharina Reinecke
University of Washington, Seattle, Washington, United States
Paper URL

https://doi.org/10.1145/3613904.3642289

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: AI for Researchers

313C
5 presentations
2024-05-15 18:00–19:20