The application of Artificial Intelligence (AI) across a wide range of domains comes with both high expectations of its benefits and dire predictions of misuse. While AI systems have largely been driven by a technology-centered design approach, the potential societal consequences of AI have mobilized both HCI and AI researchers to pursue research on human-centered artificial intelligence (HCAI). However, considerable ambiguity remains about what it means to frame, design, and evaluate HCAI. This paper presents a critical review of the growing corpus of peer-reviewed literature on HCAI to characterize how the community defines HCAI. Our review contributes an overview and map of HCAI research based on work that explicitly mentions the terms ‘human-centered artificial intelligence’ or ‘human-centered machine learning’, or variations thereof, and suggests future challenges and research directions. The map reveals the breadth of research in HCAI, its established clusters, and the emerging areas of Interaction with AI and Ethical AI. The paper contributes a new definition of HCAI and calls for greater collaboration between AI and HCI research and for new HCAI constructs.
https://doi.org/10.1145/3544548.3580959
Artificial intelligence (AI) presents new challenges for the user experience (UX) of products and services. Recently, practitioner-facing resources and design guidelines have become available to ease some of these challenges. However, little research has investigated whether and how these guidelines are used, and how they impact practice. In this paper, we investigated how industry practitioners use the People + AI Guidebook. We conducted interviews with 31 practitioners (i.e., designers and product managers) to understand how they use human-AI guidelines when designing AI-enabled products. Our findings revealed that practitioners use the guidebook not only to address AI's design challenges, but also for education, for cross-functional communication, and for developing internal resources. We found that practitioners want more support for early-phase ideation and problem formulation to avoid AI product failures. We discuss the implications for future resources that aim to help practitioners design AI products.
https://doi.org/10.1145/3544548.3580900
Language models are increasingly attracting interest from writers. However, such models lack long-range semantic coherence, limiting their usefulness for long-form creative writing. We address this limitation by applying language models hierarchically, in a system we call Dramatron. By building structural context via prompt chaining (sketched in code below), Dramatron can generate coherent scripts and screenplays complete with title, characters, story beats, location descriptions, and dialogue. We illustrate Dramatron’s usefulness as an interactive co-creative system with a user study of 15 theatre and film industry professionals. Participants co-wrote theatre scripts and screenplays with Dramatron and engaged in open-ended interviews. We report reflections both from our interviewees and from independent reviewers who critiqued performances of several of the scripts to illustrate how both Dramatron and hierarchical text generation could be useful for human-machine co-creativity. Finally, we discuss the suitability of Dramatron for co-creativity, ethical considerations (including plagiarism and bias), and participatory models for the design and deployment of such tools.
https://doi.org/10.1145/3544548.3581225
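The hierarchical generation that the Dramatron abstract describes can be illustrated with a minimal prompt-chaining sketch. The `generate` function and the prompt formats below are assumptions standing in for any large-language-model completion API; they are not Dramatron's actual prompts.

```python
def generate(prompt: str) -> str:
    """Hypothetical placeholder for a large-language-model completion call."""
    raise NotImplementedError("wire this to the LLM API of your choice")

def write_script(logline: str) -> dict:
    """Generate a script top-down: title, characters, beats, then dialogue."""
    script = {"logline": logline}
    # Each level conditions on the outputs of the levels above it, which is
    # what gives the long-range structural coherence a single flat prompt lacks.
    script["title"] = generate(f"Logline: {logline}\nTitle:")
    script["characters"] = generate(
        f"Logline: {logline}\nTitle: {script['title']}\nCharacters:")
    script["beats"] = generate(
        f"Logline: {logline}\nCharacters: {script['characters']}\nStory beats:")
    # One dialogue pass per story beat, re-injecting the structural context.
    script["scenes"] = [
        generate(f"Logline: {logline}\nCharacters: {script['characters']}\n"
                 f"Beat: {beat}\nDialogue:")
        for beat in script["beats"].split("\n") if beat.strip()
    ]
    return script
```

The key design choice is that each generation step re-injects earlier outputs into its prompt, so locally generated dialogue stays consistent with the globally generated story structure.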
In this article, we propose a conceptual and methodological framework for measuring the impact of introducing AI systems into decision settings, based on the concept of technological dominance, i.e., the influence that an AI system can exert on human judgment and decisions. We distinguish between a negative component of dominance (automation bias) and a positive one (algorithm appreciation) by focusing on and systematizing the patterns of interaction between human judgment and AI support, or reliance patterns, and their associated cognitive effects (illustrated in the sketch below). We then define statistical approaches for measuring these dimensions of dominance, as well as corresponding qualitative visualizations. Through four medical case studies, we illustrate how the proposed methods can be used to assess dominance and related cognitive biases in real-world settings. Our study lays the groundwork for future investigations into the effects of introducing AI support into naturalistic and collaborative decision-making.
https://doi.org/10.1145/3544548.3581095
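As a rough illustration of reliance patterns, the sketch below classifies decision cases by whether the human switched to the AI's advice and whether that advice was correct. The pattern names, the `Case` structure, and the summary statistic are illustrative assumptions, not the statistical measures the paper proposes.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """One decision: the human's initial answer, the AI's advice,
    the human's final answer, and the ground truth. (Hypothetical schema.)"""
    human_initial: str
    ai_advice: str
    human_final: str
    truth: str

def reliance_pattern(c: Case) -> str:
    switched = c.human_initial != c.human_final
    followed_ai = c.human_final == c.ai_advice
    if not switched:
        return "initial_agreement" if followed_ai else "self_reliance"
    if followed_ai:
        # Positive switch (algorithm appreciation) when the AI was right,
        # detrimental switch (automation bias) when it was wrong.
        return "appreciation" if c.ai_advice == c.truth else "automation_bias"
    return "other_switch"  # changed answer, but not to the AI's advice

def dominance_summary(cases: list[Case]) -> dict:
    """Proportion of each reliance pattern across a set of cases."""
    patterns = [reliance_pattern(c) for c in cases]
    return {p: patterns.count(p) / len(patterns) for p in set(patterns)}
```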
Visual content must be labeled to facilitate navigation and retrieval, or to provide ground-truth data for supervised machine learning approaches. The efficiency of labeling techniques is crucial for producing large numbers of high-quality labels, yet existing techniques remain sparsely evaluated. We systematically evaluate the efficiency of tagging and browsing tasks in relation to the number of images displayed, the interaction modes, and the visual complexity of the images. Tagging focuses on a single image to assign it multiple labels (an image-oriented strategy), while browsing focuses on a single label to assign it to multiple images (a label-oriented strategy); both strategies are sketched in code below. In a first experiment (n=18), we examine the nudges that induce participants to adopt one of the strategies. In a second experiment (n=24), we evaluate the efficiency of the two strategies. Results suggest that the image-oriented strategy (tagging) leads to shorter annotation times, especially for complex images, and that participants tend to adopt it regardless of the conditions they face.
https://doi.org/10.1145/3544548.3580926
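The two strategies amount to different traversal orders over the same image-by-label matrix; this hedged sketch makes that explicit. `applies` is a hypothetical stand-in for the annotator's judgment of whether a label fits an image.

```python
def tag_image_oriented(images, labels, applies):
    """Tagging: fix one image at a time and assign all of its labels."""
    return {img: {lab for lab in labels if applies(img, lab)}
            for img in images}

def browse_label_oriented(images, labels, applies):
    """Browsing: fix one label at a time and assign it to all matching images."""
    assignments = {img: set() for img in images}
    for lab in labels:          # outer loop over labels, not images
        for img in images:
            if applies(img, lab):
                assignments[img].add(lab)
    return assignments
```

Both functions produce identical assignments; what the study compares is the efficiency of the two human traversal orders, not their output.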
Although menu selection has been extensively studied in HCI, most existing studies have focused on sighted users, leaving blind users’ menu selection under-studied. In this paper, we propose a computational model that can simulate blind users’ menu selection performance and strategies, including their use of techniques such as swiping, gliding, and direct touch. We assume that selection behavior emerges as an adaptation to the user’s memory of item positions, based on experience and on feedback from the screen reader. A key component of our model is a model of long-term memory that predicts how a user recalls and forgets item positions based on previous menu selections (an illustrative sketch follows below). We compare simulation results predicted by our model against data obtained in an empirical study with ten blind users. The model correctly simulated the effects of menu length and menu arrangement on selection time, as well as the action composition and menu selection strategies of the users.
https://doi.org/10.1145/3544548.3580640
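The abstract does not give the memory equations, so the sketch below is only an ACT-R-style stand-in for the kind of long-term memory component it describes: recall of an item's position strengthens with repeated selections and decays with time. The power-law decay form and the parameter values are assumptions, not the paper's fitted model.

```python
import math

def activation(selection_times: list[float], now: float,
               decay: float = 0.5) -> float:
    """Base-level activation: sum of power-law-decayed past selection traces."""
    traces = sum((now - t) ** -decay for t in selection_times if t < now)
    return math.log(traces) if traces > 0 else float("-inf")

def recall_probability(act: float, threshold: float = 0.0,
                       noise: float = 0.4) -> float:
    """Logistic probability of retrieving the item's position from memory."""
    if act == float("-inf"):
        return 0.0
    return 1.0 / (1.0 + math.exp(-(act - threshold) / noise))
```

Under such a model, a simulated user who fails to recall a position falls back on sequential exploration with the screen reader (e.g., swiping item by item) rather than direct touch, which is one way a memory component can reproduce the observed effects of menu length and arrangement on selection time.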