Reporting and Reviewing LLM-Integrated Systems in HCI: Challenges and Considerations

Abstract

What should HCI scholars consider when reporting and reviewing papers that involve LLM-integrated systems? We interview 18 authors of LLM-integrated system papers on their authoring and reviewing experiences. We find that norms of trust-building between authors and reviewers appear to be eroded by the uncertainty of LLM behavior and hyperbolic rhetoric surrounding AI. Authors perceive that reviewers apply uniquely skeptical and inconsistent standards towards papers that report LLM-integrated systems, and mitigate mistrust by adding technical evaluations, justifying usage, and de-emphasizing LLM presence. Authors' views challenge blanket directives to report all prompts and use open models, arguing that prompt reporting is context-dependent and that proprietary model usage can be justified despite ethical concerns. Finally, some tensions in peer review appear to stem from clashes between the norms and values of HCI and ML/NLP communities, particularly around what constitutes a contribution and an appropriate level of technical rigor. Based on our findings and additional feedback from six expert HCI researchers, we present a set of considerations for authors, reviewers, and HCI communities around reporting and reviewing papers that involve LLM-integrated systems.

Authors
Karla Felix Navarro
Université de Montréal, Montréal, Quebec, Canada
Eugene Syriani
Université de Montréal, Montréal, Quebec, Canada
Ian Arawjo
Université de Montréal, Montréal, Quebec, Canada

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Trust and Perception in AI Systems

P1 - Room 118
7 presentations
April 14, 2026, 20:15–21:45