Information gathering is an important capability that allows chatbots to understand and respond to users' needs, yet the effectiveness of LLM-powered chatbots at this task remains underexplored. Our work investigates this question in the context of clinical pre-consultation, wherein patients provide information to an intermediary before meeting with a physician to facilitate communication and reduce consultation inefficiencies. We conducted a study at a walk-in clinic in which 45 patients interacted with one of three systems: an LLM-powered chatbot, a questionnaire, or a Wizard-of-Oz agent operated by a human. We analyzed patients' messages using metrics adapted from Grice's maxims to assess the quality of the information gathered at each conversation turn. We found that the Wizard and the LLM-powered chatbot were more successful than the questionnaire because both modified their questions and asked follow-ups when participants gave unsatisfactory answers. However, the LLM asked far fewer follow-up questions than the Wizard, particularly when participants gave unclear answers.
https://dl.acm.org/doi/10.1145/3706598.3713613
Published at the ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)