Research in HCI and autism has become more focused on involving autistic adults in technological design. In this paper, we present the results of a scoping review analysis of 11 projects across 18 papers that focused on including autistic adults in the design of technology that impacts their lives. This paper contributes a deeper understanding of how autistic adults were involved in participatory design processes. Our findings reveal mixed positions on how the lived autistic perspective was harnessed to direct the application of topics and technologies chosen. Most projects employed infrastructures to enhance participation (e.g., providing multiple modes to participate or employing a tailored methodology). We pose future opportunities for autistic involvement, for example, in topics and technologies where autistic research is employed (e.g., autism diagnosis and machine learning), reviewing the importance of formal diagnosis for inclusion, and harnessing the multiple expertise of autistic adults.
https://dl.acm.org/doi/10.1145/3706598.3713961
Autistic individuals often experience negative self-talk (NST), leading to increased anxiety and depression. While therapy is recommended, it presents challenges for many autistic individuals. Meanwhile, a growing number are turning to large language models (LLMs) for mental health support. To understand how autistic individuals perceive AI's role in coping with NST, we surveyed 200 autistic adults and interviewed practitioners. We also analyzed LLM responses to participants' hypothetical prompts about their NST. Our findings show that participants view LLMs as useful for managing NST by identifying and reframing negative thoughts. Both participants and practitioners recognize AI's potential to support therapy and emotional expression. Participants also expressed concerns about LLMs' understanding of neurodivergent thought patterns, particularly due to the neurotypical bias of LLMs. Practitioners critiqued LLMs' responses as overly wordy, vague, and overwhelming. This study contributes to the growing research on AI-assisted mental health support, with specific insights for supporting the autistic community.
https://dl.acm.org/doi/10.1145/3706598.3714287
Large Language Models (LLMs) like ChatGPT, used by over 200 million people monthly, are increasingly applied in disability contexts, including autism research. However, there has been limited exploration of the potential biases these models hold about autistic people. To explore what biases ChatGPT demonstrates about autistic people, we prompted GPT-3.5 to create three personas, choose one to be autistic, and explain its reasoning for this choice and any suggested changes to the persona description. Our quantitative analysis of the chosen personas indicates that gender and profession influenced GPT's choices. Additionally, our qualitative analysis revealed ChatGPT's tendency to highlight the importance of representation while simultaneously perpetuating mostly negative biases about autistic people, illustrating a "bias paradox," a concept adapted from feminist studies. By applying this concept to LLMs, we provide a lens through which researchers might identify, understand, and address fundamental challenges in the development of responsible and inclusive AI.
https://dl.acm.org/doi/10.1145/3706598.3713420
Access to public spaces is of the utmost importance for social cohesion, inclusion, and civic engagement. Nevertheless, a large majority of public spaces remain incredibly uncomfortable environments for neurodivergent individuals due to, for instance, the unpredictability of such spaces and the sensory stimuli within them. Smart City technologies present an exciting opportunity to improve the accessibility and enjoyment of the spaces where they are deployed by, for instance, offering users the ability to customise a space to their specific sensory needs. However, the research topic of public space technologies for neurodivergent individuals remains scattered and sparsely documented. This critical review analyses the existing domains of inquiry, contributing a theoretical framework based on Spatial Justice and Neuroqueer Technoscience and suggests future research avenues informed by this framework. We advocate for the participatory co-creation of a neurodivergent-affirming landscape of public space technologies that both support neurodivergent needs and promote neurodivergent joy.
https://dl.acm.org/doi/10.1145/3706598.3713539
Neurotypical modes of existence and interaction are enforced through traditional social norms, compelling individuals who diverge from these norms, such as those who are neurodivergent, to conform through ``masking.'' Technology research and design often also ascribe to these conventional norms, creating technology that reinforces neurodivergent people's need to mask. In this research, we turn to neurodivergent communities online to develop an understanding of masking behaviors. We adopt a two-tiered research approach consisting of a qualitative thematic analysis of TikTok videos and a survey questionnaire. Through this work, we initiate discussion on the complexities of neurodivergent masking as a pervasive social adaptation. We urge HCI researchers to critically reframe intervention design and research practices that may either perpetuate or seek to address masking.
https://dl.acm.org/doi/10.1145/3706598.3714094
While many technologies have been developed for facilitating interaction between neurodivergent and neurotypical people to bridge communication differences and reduce social exclusion, most focus on supporting and teaching neurodivergent people to adapt to neurotypical standards and norms. To promote a more balanced approach to bridging the social gap, we conducted a 5-day diary study and semi-structured interviews with 16 participants (8 neurotypical and 8 with intellectual disability) to examine the current factors and barriers to their social interactions and to explore the design of social support chatbot systems. Our findings revealed diverging views between the groups on factors they valued in their interaction, and identified social uncertainty and differing social expectations as the main barriers to successful interactions. Based on the results, we outline three pitfalls that social support chatbots can fall into if not designed mindfully, and suggest design approaches that promote bidirectional social support and interdependence.
https://dl.acm.org/doi/10.1145/3706598.3713344
Social play is crucial for children's well-being and development. However, many social play technologies fail to address the specific characteristics and needs of neurodiverse play and often overlook divergent play styles. To address this, we first conducted a co-design study with a neurodiverse group of 7 children (Age 7-8) and, based on insights from these sessions, then developed a prototype, ChromaConnect, that allowed children to express their play style to one another during play. To evaluate ChromaConnect's ability to support neurodiverse social play in different contexts, we observed children using it in both structured and unstructured play settings. Our findings show that ChromaConnect enabled children to create a common language of play, made divergent play modes more visible, and facilitated explicit expression of social play initiation. We discuss how these findings could be used to design `accompanying social play things' that are more inclusive of neurodiverse play characteristics and divergent play styles.
https://dl.acm.org/doi/10.1145/3706598.3713738