Collaboration with Conversational AI Assistants for UX Evaluation: Questions and How to Ask Them (Voice vs. Text)

Abstract

AI is promising in assisting UX evaluators with analyzing usability tests, but its judgments are typically presented as non-interactive visualizations. Evaluators may have questions about test recordings, but have no way of asking them. Interactive conversational assistants provide a Q&A dynamic that may improve analysis efficiency and evaluator autonomy. To understand the full range of analysis-related questions, we conducted a Wizard-of-Oz design probe study with 20 participants who interacted with simulated AI assistants via text or voice. We found that participants asked for five categories of information: user actions, user mental model, help from the AI assistant, product and task information, and user demographics. Those who used the text assistant asked more questions, but the question lengths were similar. The text assistant was perceived as significantly more efficient, but both were rated equally in satisfaction and trust. We also provide design considerations for future conversational AI assistants for UX evaluation.

Authors
Emily Kuang
Rochester Institute of Technology, Rochester, New York, United States
Ehsan Jahangirzadeh Soure
University of Waterloo, Waterloo, Ontario, Canada
Mingming Fan
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Jian Zhao
University of Waterloo, Waterloo, Ontario, Canada
Kristen Shinohara
Rochester Institute of Technology, Rochester, New York, United States
Paper URL

https://doi.org/10.1145/3544548.3581247

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Conversational Agents

Hall F
6 presentations
2023-04-26, 20:10–21:35