This study session has ended. Thank you for participating.
Voice assistants have afforded users rich interaction opportunities to access information and issue commands in a variety of contexts. However, some users feel uneasy or creeped out by voice assistants, leading to a decreased desire to use them. As no comprehensive understanding yet exists of the factors that lead users to perceive voice assistants as creepy, this research developed an empirical scale to measure the creepiness inherent in various voice assistants. Following prior scale-development methodologies, a 7-item Perceived Creepiness of Voice Assistants Scale (PCAS) was created and validated. The scale measures how creepy users would perceive a new voice assistant to be. It was developed so that researchers and designers can evaluate the next generation of voice assistants before such assistants are released to the wider public.
Conversational agents are emerging as channels for natural and accessible interaction with digital services. Their benefits span a wide range of usage scenarios, addressing visual impairments as well as situational impairments that can benefit from voice-based interaction. A few works have highlighted the potential and feasibility of adopting conversational agents to make the Web truly accessible to everyone. Yet, there is still a lack of concrete guidance on designing conversational experiences for browsing the Web. This paper illustrates a human-centered process that involved 26 blind and visually impaired people to investigate the difficulties they face when using assistive technology to access the Web, and their attitudes toward and preferences for adopting conversational agents. In response to the identified challenges, the paper introduces patterns for conversational Web browsing. It also discusses design implications that can promote Conversational AI as a technology to enhance Web accessibility.
A central problem for chatbots in the customer care domain revolves around how people collaborate with the agent to achieve their own situated goals. Most previous research, however, has relied on experiments in artificial settings rather than observation of real-world interactions. Moreover, such research has mostly analyzed users’ responses to communication breakdowns rather than the wider collaboration strategies used during a conversation. In this paper, we qualitatively analyzed 12,477 real-world exchanges with a task-based chatbot, using a Grounded Theory approach as a rigorous coding method. We identified two main aspects of collaboration, behavioral and conversational, and for each aspect we highlight the strategies that users adopt to “work together” with the agent. These strategies may be employed from the very beginning of the conversation or in response to misunderstandings during ongoing interaction, and they may exhibit different dynamics as the conversation evolves.
Conversational agents such as chatbots have emerged as a useful resource for accessing real-time health information online. Perceptions of chatbot trust and credibility have been attributed to the anthropomorphism and humanness of the chatbot design, with gender and race influencing their reception. Few existing studies have looked specifically at the diversity of chatbot avatar design with respect to race, age, and gender, which may have particular significance for racially minoritized users such as Black older adults. In this paper, we explored perceptions of chatbots with varying identities for health information seeking in a diary and interview study with 30 Black older adults. Our findings suggest that while racial and age likeness influence feelings of trust and comfort with chatbots, constructs such as professionalism, likeability, and overall familiarity also influence reception. Based on these findings, we provide implications for designing text-based chatbots that consider Black older adults.
Music can affect the human brain and cognition. Melodies and lyrics that resonate with us can awaken our inner feelings and thoughts; being in touch with these feelings and expressing them allow us to understand ourselves better and increase our self-awareness. To support self-awareness elicited by music, we designed a novel conversational agent (CA) that guides users to become self-aware and express their thoughts when they listen to music. Moreover, we investigated two prominent design factors in the CA: proactive guidance and social information. We then conducted a 2×2 between-subjects experiment (N = 90) to investigate how the two design factors affect self-awareness, user acceptance, and mental well-being. The results of a five-day user study reveal that high proactive guidance and social information increased self-awareness, but high proactive guidance tended to influence perceived autonomy and usefulness negatively. Further, users’ subjective feedback revealed the CA's potential to support mental well-being.
AI is promising in assisting UX evaluators with analyzing usability tests, but its judgments are typically presented as non-interactive visualizations. Evaluators may have questions about test recordings, but have no way of asking them. Interactive conversational assistants provide a Q&A dynamic that may improve analysis efficiency and evaluator autonomy. To understand the full range of analysis-related questions, we conducted a Wizard-of-Oz design probe study with 20 participants who interacted with simulated AI assistants via text or voice. We found that participants asked for five categories of information: user actions, user mental model, help from the AI assistant, product and task information, and user demographics. Those who used the text assistant asked more questions, but the question lengths were similar. The text assistant was perceived as significantly more efficient, but both were rated equally in satisfaction and trust. We also provide design considerations for future conversational AI assistants for UX evaluation.