AI for Researchers

Conference Name
CHI 2024
Know Your Audience: The benefits and pitfalls of generating plain language summaries beyond the "general" audience
Abstract

Language models (LMs) show promise as tools for communicating science to the general public by simplifying and summarizing complex language. Because models can be prompted to generate text for a specific audience (e.g., college-educated adults), LMs might be used to create multiple versions of plain language summaries for people with different levels of familiarity with scientific topics. However, the benefits and pitfalls of such adaptive plain language are unclear. When is simplification necessary, what are the costs of doing so, and do these costs differ for readers with different background knowledge? Through three within-subjects studies in which we surfaced summaries written for different envisioned audiences to participants from different backgrounds, we found that while simpler text led to the best reading experience for readers with little to no familiarity with a topic, readers with high familiarity tended to ignore certain details in overly plain summaries (e.g., study limitations). Our work provides methods and guidance for adapting plain language summaries beyond the single "general" audience.
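
To make the audience-conditioning idea concrete, here is a minimal sketch of prompting a model for audience-specific plain language summaries, assuming the OpenAI Python client; the model name, audience labels, and prompt wording are illustrative placeholders, not the prompts used in the paper.

    # Sketch: one abstract, several audience-tailored summaries (assumed API).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    AUDIENCES = {  # hypothetical audience labels
        "novice": "an adult with no background in this scientific field",
        "familiar": "a college-educated adult who follows science news",
        "expert": "a researcher in an adjacent scientific field",
    }

    def plain_language_summary(abstract: str, audience: str) -> str:
        """Summarize `abstract` for the named envisioned audience."""
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "Summarize scientific abstracts in plain language for "
                            + AUDIENCES[audience]
                            + ". Preserve key caveats such as study limitations."},
                {"role": "user", "content": abstract},
            ],
        )
        return response.choices[0].message.content

    # One source abstract yields one version per envisioned audience.
    # summaries = {a: plain_language_summary(text, a) for a in AUDIENCES}

Running the same abstract through each audience prompt yields parallel versions whose simplicity and retained detail can then be compared, as the studies above do with human readers.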

Authors
Tal August
Allen Institute for AI, Seattle, Washington, United States
Kyle Lo
Allen Institute for AI, Seattle, Washington, United States
Noah A. Smith
University of Washington, Seattle, Washington, United States
Katharina Reinecke
University of Washington, Seattle, Washington, United States
Paper URL

https://doi.org/10.1145/3613904.3642289

Evaluating Large Language Models on Academic Literature Understanding and Review: An Empirical Study among Early-stage Scholars
Abstract

The rapid advancement of large language models (LLMs) such as ChatGPT makes LLM-based academic tools possible. However, little research has empirically evaluated how scholars perform different types of academic tasks with LLMs. Through an empirical study followed by semi-structured interviews, we assessed 48 early-stage scholars' performance in conducting core academic activities (i.e., paper reading and literature reviews) under different levels of time pressure. Before conducting the tasks, participants received different training programs regarding the limitations and capabilities of LLMs; after completing the tasks, each participant was interviewed. We analyzed quantitative data on how time pressure, task type, and training program influenced participants' performance on the academic tasks. The semi-structured interviews provided additional insight into the factors influencing task performance, participants' perceptions of LLMs, and concerns about integrating LLMs into academic workflows. These findings can guide more appropriate usage and design of LLM-based tools for assisting academic work.
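
As a rough illustration of the factorial analysis this abstract implies, the sketch below fits performance scores against time pressure, task type, and training program; the column names, the CSV file, and the choice of a three-factor ANOVA are assumptions for illustration, not the paper's actual statistical procedure.

    # Sketch: three-factor analysis of task performance (assumed data layout).
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # Hypothetical long-format data: one row per participant x task,
    # with columns: score, pressure, task, training.
    df = pd.read_csv("task_performance.csv")

    # Fit performance against the three factors and their interactions,
    # then report F-tests for each term.
    model = ols("score ~ C(pressure) * C(task) * C(training)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))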

Authors
Jiyao Wang
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Haolong Hu
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Zuyuan Wang
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Song Yan
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Youyu Sheng
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Dengbo He
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Paper URL

https://doi.org/10.1145/3613904.3641917

Understanding the Role of Large Language Models in Personalizing and Scaffolding Strategies to Combat Academic Procrastination
Abstract

Traditional interventions for academic procrastination often fail to capture the nuanced, individual-specific factors that underlie it. Large language models (LLMs) hold immense potential for addressing this gap because they permit open-ended inputs and can customize interventions to individuals' unique needs. However, user expectations and the potential limitations of LLMs in this context remain underexplored. To address this, we conducted interviews and focus group discussions with 15 university students and 6 experts, during which we presented a technology probe that generates personalized advice for managing procrastination. Our results highlight the necessity for LLMs to provide structured, deadline-oriented steps and enhanced user support mechanisms. They also surface the need for an adaptive approach to questioning based on factors such as busyness. These findings offer crucial design implications for the development of LLM-based tools for managing procrastination while cautioning against the use of LLMs for therapeutic guidance.

Award
Honorable Mention
Authors
Ananya Bhattacharjee
University of Toronto, Toronto, Ontario, Canada
Yuchen Zeng
University of Toronto, Toronto, Ontario, Canada
Sarah Yi Xu
University of Toronto, Toronto, Ontario, Canada
Dana Kulzhabayeva
University of Toronto, Toronto, Ontario, Canada
Minyi Ma
University of Toronto, Toronto, Ontario, Canada
Rachel Kornfield
Northwestern University, Chicago, Illinois, United States
Syed Ishtiaque Ahmed
University of Toronto, Toronto, Ontario, Canada
Alex Mariakakis
University of Toronto, Toronto, Ontario, Canada
Mary P. Czerwinski
Microsoft Research, Redmond, Washington, United States
Anastasia Kuzminykh
University of Toronto, Toronto, Ontario, Canada
Michael Liut
University of Toronto Mississauga, Mississauga, Ontario, Canada
Joseph Jay Williams
University of Toronto, Toronto, Ontario, Canada
Paper URL

https://doi.org/10.1145/3613904.3642081

From Paper to Card: Transforming Design Implications with Generative AI
Abstract

Communicating design implications is common when publishing academic papers in the HCI community, yet these papers are rarely read and used by designers. One solution is to use design cards as a translational resource that communicates valuable insights from papers in a more digestible and accessible format to assist in design processes. However, creating design cards can be time-consuming, and authors may lack the resources or know-how to produce them. Through an iterative design process, we built a system that helps create design cards from academic papers using an LLM and a text-to-image model. Our evaluation with designers (N=21) and authors of selected papers (N=12) revealed that designers perceived the design implications on our design cards as more inspiring and generative than reading the original paper text, and the authors viewed our system as an effective way of communicating their design implications. We also propose future enhancements for AI-generated design cards.
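
The two-stage pipeline the abstract describes (an LLM distills an implication, then a text-to-image model renders the card art) might look roughly like the sketch below, assuming the OpenAI Python client; the model choices and prompt wording are illustrative assumptions, not the authors' implementation.

    # Sketch: paper text -> design implication -> card illustration (assumed API).
    from openai import OpenAI

    client = OpenAI()

    def make_design_card(paper_text: str) -> tuple[str, str]:
        """Return (implication, image_url) for one design card."""
        # Step 1: an LLM distills one actionable design implication.
        chat = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=[{"role": "user",
                       "content": "State one concise, actionable design "
                                  "implication from this paper:\n\n" + paper_text}],
        )
        implication = chat.choices[0].message.content

        # Step 2: a text-to-image model renders the card artwork.
        image = client.images.generate(
            model="dall-e-3",  # illustrative model choice
            prompt="Flat illustration for a design card about: " + implication,
            size="1024x1024",
        )
        return implication, image.data[0].url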

Authors
Donghoon Shin
University of Washington, Seattle, Washington, United States
Lucy Lu Wang
University of Washington, Seattle, Washington, United States
Gary Hsieh
University of Washington, Seattle, Washington, United States
Paper URL

https://doi.org/10.1145/3613904.3642266

CollabCoder: A Lower-barrier, Rigorous Workflow for Inductive Collaborative Qualitative Analysis with Large Language Models
Abstract

Collaborative Qualitative Analysis (CQA) can enhance the rigor and depth of qualitative analysis by incorporating varied viewpoints. Nevertheless, ensuring a rigorous CQA procedure can itself be complex and costly. To lower this barrier, we take a theoretical perspective to design a one-stop, end-to-end workflow, CollabCoder, that integrates Large Language Models (LLMs) into key inductive CQA stages. In the independent open coding phase, CollabCoder offers AI-generated code suggestions and records decision-making data. During the iterative discussion phase, it promotes mutual understanding by sharing this data within the coding team and uses quantitative metrics to identify coding (dis)agreements, aiding consensus-building. In the codebook development phase, CollabCoder provides primary code group suggestions, lightening the workload of developing a codebook from scratch. A 16-user evaluation confirmed the effectiveness of CollabCoder, demonstrating its advantages over an existing CQA platform. All related materials for CollabCoder, including code and further extensions, will be made available at: https://gaojie058.github.io/CollabCoder/.
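
As one example of the "quantitative metrics to identify coding (dis)agreements" mentioned above, the sketch below computes Cohen's kappa between two coders and lists the excerpts they disagree on; the metric choice and the data are illustrative assumptions, since the abstract does not specify which metrics CollabCoder uses.

    # Sketch: flagging coding (dis)agreements between two coders.
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical codes assigned independently to the same ten excerpts.
    coder_a = ["trust", "trust", "cost", "privacy", "cost",
               "trust", "privacy", "cost", "trust", "privacy"]
    coder_b = ["trust", "cost", "cost", "privacy", "cost",
               "trust", "privacy", "privacy", "trust", "privacy"]

    # Chance-corrected agreement over the whole set...
    print(f"Cohen's kappa: {cohen_kappa_score(coder_a, coder_b):.2f}")

    # ...and the specific excerpts worth discussing toward consensus.
    disagreements = [i for i, (a, b) in enumerate(zip(coder_a, coder_b)) if a != b]
    print("Discuss excerpts:", disagreements)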

Authors
Jie Gao
Singapore University of Technology and Design, Singapore, Singapore
Yuchen Guo
Singapore University of Technology and Design, Singapore, Singapore
Gionnieve Lim
Singapore University of Technology and Design, Singapore, Singapore
Tianqin Zhang
Singapore University of Technology and Design, Singapore, Singapore
Zheng Zhang
University of Notre Dame, Notre Dame, Indiana, United States
Toby Jia-Jun Li
University of Notre Dame, Notre Dame, Indiana, United States
Simon Tangi Perrault
Singapore University of Technology and Design, Singapore, Singapore
Paper URL

https://doi.org/10.1145/3613904.3642002
