Empathy Is All You Need: How a Conversational Agent Should Respond to Verbal Abuse
Description

With the popularity of AI-infused systems, conversational agents (CAs) are becoming essential in diverse areas, offering new functionality and convenience but simultaneously suffering misuse and verbal abuse. We examine whether conversational agents' response styles under varying abuse types influence emotions known to mitigate people's aggressive behaviors, involving three verbal abuse types (Insult, Threat, Swearing) and three response styles (Avoidance, Empathy, Counterattacking). Ninety-eight participants were assigned to one of the abuse type conditions, interacted with the three spoken (voice-based) CAs in turn, and reported their feelings of guilt, anger, and shame after each session. The results show that the agent's response style has a significant effect on user emotions: participants were less angry and more guilty with the empathy agent than with the other two agents. Furthermore, we investigated the current status of commercial CAs' responses to verbal abuse. Our study findings have direct implications for the design of conversational agents.

FrownOnError: Interrupting Responses from Smart Speakers by Facial Expressions
Description

In conversations with smart speakers, misunderstandings of users' requests lead to erroneous responses. We propose FrownOnError, a novel interaction technique that enables users to interrupt the responses with intentional but natural facial expressions. This method leverages the human tendency for facial expressions to change when we receive unexpected responses. We conducted a first user study (N=12) to understand users' intuitive reactions to correct and incorrect responses. Our results reveal a significant difference in the frequency of occurrence and intensity of users' facial expressions between the two conditions, and show that frowning and raising eyebrows are intuitive to perform and easy to control. Our second user study (N=16) evaluated the user experience and interruption efficiency of FrownOnError, and the third user study (N=12) explored suitable conversation recovery strategies after the interruptions. Our results show that FrownOnError can be accurately detected (precision: 97.4%, recall: 97.6%), provides the most timely interruption compared to the baseline methods of wake-up word and button press, and is rated by users as the most intuitive and easiest to perform.

OMOY: A Handheld Robotic Gadget that Shifts its Weight to Express Emotions and Intentions
Description

We developed a robotic gadget equipped with a movable weight inside its body. By controlling the movement of the internal weight together with other robotic behaviors such as hand gestures or speech dialogues, the gadget is expected to enhance emotional and intentional messaging between users. To gain knowledge for designing effective weight shifts, we conducted an elicitation study investigating how users holding the gadget in their hand interpreted its 36 weight shift patterns, generated by varying four basic movement parameters (target position, trajectory, speed, and repetition). The results present mappings between these parameters and users' emotional perceptions, and reveal specific weight shift patterns that can express certain human emotions and intentions. These findings will be useful for designing effective weight shifts that enhance emotional and intentional messaging between users. This study attempts to open a new dimension in the expressive capability of robotic gadgets.

Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI
Description

Many organizations have published principles intended to guide the ethical development and deployment of AI systems; however, their abstract nature makes them difficult to operationalize. Some organizations have therefore produced AI ethics checklists, as well as checklists for more specific concepts, such as fairness, as applied to AI systems. But unless checklists are grounded in practitioners' needs, they may be misused. To understand the role of checklists in AI ethics, we conducted an iterative co-design process with 48 practitioners, focusing on fairness. We co-designed an AI fairness checklist and identified desiderata and concerns for AI fairness checklists in general. We found that AI fairness checklists could provide organizational infrastructure for formalizing ad-hoc processes and empowering individual advocates. We highlight aspects of organizational culture that may impact the efficacy of AI fairness checklists, and suggest future design directions.

Effects of Persuasive Dialogues: Testing Bot Identities and Inquiry Strategies
Description

Intelligent conversational agents, or chatbots, can take on various identities and are increasingly engaging in more human-centered conversations with persuasive goals. However, little is known about how identities and inquiry strategies influence a conversation's effectiveness. We conducted an online study in which 790 participants interacted with a chatbot that attempted to persuade them to donate to charity. We designed a two-by-four factorial experiment (two chatbot identities and four inquiry strategies) in which participants were randomly assigned to conditions. Findings showed that the perceived identity of the chatbot had significant effects on the persuasion outcome (i.e., donation) and interpersonal perceptions (i.e., competence, confidence, warmth, and sincerity). Further, we identified interaction effects between perceived identities and inquiry strategies. We discuss the theoretical and practical implications of these findings for developing ethical and effective persuasive chatbots. Our published data, code, and analyses serve as a first step toward building competent, ethical persuasive chatbots.
