“It Became My Buddy, But I’m Not Afraid to Disagree”: A Multi-Session Study of UX Evaluators Collaborating with Conversational AI Assistants

Abstract

AI-assisted usability analysis can potentially reduce the time and effort of finding usability problems, yet little is known about how an AI's perceived expertise influences evaluators' analytic strategies and perceptions over time. We ran a within-subjects, five-session study (six hours per participant) with 12 professional UX evaluators who worked with two conversational assistants (CAs) designed to appear novice- or expert-like (differing in suggestion quantity and response accuracy). We logged behavioral measures (number of passes, suggestion acceptance rate), collected subjective ratings (trust, perceived efficiency), and conducted semi-structured interviews. Participants experienced an initial novelty effect and a subsequent dip in trust that recovered over time. Their efficiency improved as they shifted from a two-pass to a one-pass video inspection approach. Evaluators ultimately rated the expert-like CA as significantly more efficient, trustworthy, and comprehensive, despite not perceiving expertise differences early on. We conclude with design implications for adapting AI expertise to enable calibrated human-AI collaboration.

Authors
Emily Kuang
York University, Toronto, Ontario, Canada
Ehsan Jahangirzadeh Soure
University of Waterloo, Waterloo, Ontario, Canada
Luyao Shen
Computational Media and Arts Thrust, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Nitesh Goyal
Google Research, New York, New York, United States
Mingming Fan
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Kristen Shinohara
Rochester Institute of Technology, Rochester, New York, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: LLM Interaction & Conversational Agents

P1 - Room 113
7 presentations
2026-04-14, 20:15–21:45