Conversational Agents

Conference Name
CHI 2024
Apple’s Knowledge Navigator: Why Doesn’t that Conversational Agent Exist Yet?
Abstract

Apple’s 1987 Knowledge Navigator video contains a vision of a sophisticated digital personal assistant, but the natural human-agent conversational dialog shown does not currently exist. To investigate why, the authors analyzed the video using three theoretical frameworks: the DiCoT framework, the HAT Game Analysis framework, and the Flows of Power framework. These were used to codify the human-agent interactions and classify the agent’s capabilities. While some barriers to creating such agents are technological, other barriers arise from privacy, social and situational factors, trust, and the financial business case. The social roles and asymmetric interactions of the human and agent are discussed in the broader context of HAT research, along with the need for a new term for these agents that does not rely on a human social relationship metaphor. This research offers designers of conversational agents a research roadmap to build more highly capable and trusted non-human teammates.

Award
Honorable Mention
Authors
Amanda K. Newendorp
Iowa State University, Ames, Iowa, United States
Mohammadamin Sanaei
Iowa State University, Ames, Iowa, United States
Arthur J. Perron
Iowa State University, Ames, Iowa, United States
Hila Sabouni
Iowa State University, Ames, Iowa, United States
Nikoo Javadpour
Iowa State University, Ames, Iowa, United States
Maddie Sells
Iowa State University, Ames, Iowa, United States
Katherine Nelson
Iowa State University, Ames, Iowa, United States
Michael Dorneich
Iowa State University, Ames, Iowa, United States
Stephen B. Gilbert
Iowa State University, Ames, Iowa, United States
Paper URL

https://doi.org/10.1145/3613904.3642739

Towards Designing a Question-Answering Chatbot for Online News: Understanding Questions and Perspectives
Abstract

Large Language Models (LLMs) have created opportunities for designing chatbots that can support complex question-answering (QA) scenarios and improve news audience engagement. However, we still lack an understanding of what roles journalists and readers deem fit for such a chatbot in newsrooms. To address this gap, we first interviewed six journalists to understand how they answer questions from readers currently and how they want to use a QA chatbot for this purpose. To understand how readers want to interact with a QA chatbot, we then conducted an online experiment (N=124) where we asked each participant to read three news articles and ask questions to either the author(s) of the articles or a chatbot. By combining results from the studies, we present alignments and discrepancies between how journalists and readers want to use QA chatbots and propose a framework for designing effective QA chatbots in newsrooms.

Authors
Md Naimul Hoque
University of Maryland, College Park, Maryland, United States
Ayman A. Mahfuz
The University of Texas at Austin, Austin, Texas, United States
Mayukha Sridhatri Kindi
University of Maryland, College Park, Maryland, United States
Naeemul Hassan
University of Maryland, College Park, Maryland, United States
Paper URL

https://doi.org/10.1145/3613904.3642007

Cooking With Agents: Designing Context-aware Voice Interaction
Abstract

Voice Agents (VAs) are touted as being able to help users in complex tasks such as cooking, interacting as a conversational partner that provides information and advice while the task is ongoing. Through conversation analysis of 7 cooking sessions with a commercial VA, we identify challenges caused by a lack of contextual awareness leading to irrelevant responses, misinterpretation of requests, and information overload. Informed by this, we evaluated 16 cooking sessions with a wizard-led context-aware VA. We observed more fluent interaction between humans and agents, including more complex requests, explicit grounding within utterances, and complex social responses. We discuss reasons for this, the potential for personalisation, and the division of labour in VA communication and proactivity. Then, we discuss the recent advances in generative models and the VAs' interaction challenges. We propose limited context awareness in VAs as a step toward explainable, explorable conversational interfaces.

Award
Best Paper
Authors
Razan Jaber
Stockholm University, Stockholm, Sweden
Sabrina Zhong
University College London, London, United Kingdom
Sanna Kuoppamäki
KTH Royal Institute of Technology, Stockholm, Sweden
Aida Hosseini
KTH Royal Institute of Technology, Stockholm, Sweden
Iona Gessinger
University College Dublin, Dublin, Ireland
Duncan P. Brumby
University College London, London, United Kingdom
Benjamin R. Cowan
University College Dublin, Dublin, Ireland
Donald McMillan
Stockholm University, Stockholm, Sweden
Paper URL

https://doi.org/10.1145/3613904.3642183

"It's a Fair Game", or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents
Abstract

The widespread use of Large Language Model (LLM)-based conversational agents (CAs), especially in high-stakes domains, raises many privacy concerns. Building ethical LLM-based CAs that respect user privacy requires an in-depth understanding of the privacy risks that concern users the most. However, existing research, primarily model-centered, does not provide insight into users' perspectives. To bridge this gap, we analyzed sensitive disclosures in real-world ChatGPT conversations and conducted semi-structured interviews with 19 LLM-based CA users. We found that users are constantly faced with trade-offs between privacy, utility, and convenience when using LLM-based CAs. However, users' erroneous mental models and the dark patterns in system design limited their awareness and comprehension of the privacy risks. Additionally, the human-like interactions encouraged more sensitive disclosures, which complicated users' ability to navigate the trade-offs. We discuss practical design guidelines and the need for paradigm shifts to protect the privacy of LLM-based CA users.

Authors
Zhiping Zhang
Khoury College of Computer Sciences, Boston, Massachusetts, United States
Michelle Jia
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Hao-Ping (Hank) Lee
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Bingsheng Yao
Rensselaer Polytechnic Institute, Troy, New York, United States
Sauvik Das
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Ada Lerner
Northeastern University, Boston, Massachusetts, United States
Dakuo Wang
Northeastern University, Boston, Massachusetts, United States
Tianshi Li
Northeastern University, Boston, Massachusetts, United States
Paper URL

https://doi.org/10.1145/3613904.3642385

Metaphors in Voice User Interfaces: A Slippery Fish
Abstract

We explore a range of different metaphors used for Voice User Interfaces (VUIs) by designers, end-users, manufacturers, and researchers using a novel framework derived from semi-structured interviews and a literature review. We focus less on the well-established idea of metaphors as a way for interface designers to help novice users learn how to interact with novel technology, and more on other ways metaphors can be used. We find that metaphors people use are contextually fluid, can change with the mode of conversation, and can reveal differences in how people perceive VUIs compared to other devices. Not all metaphors are helpful, and some may be offensive. Analyzing this broader class of metaphors can help understand, perhaps even predict problems. Metaphor analysis can be a low-cost tool to inspire design creativity and facilitate complex discussions about sociotechnical issues, enabling us to spot potential opportunities and problems in the situated use of technologies.

Authors
Smit Desai
University of Illinois at Urbana-Champaign, Champaign, Illinois, United States
Michael Bernard Twidale
University of Illinois at Urbana-Champaign, Urbana, Illinois, United States