Working from home has become a mainstream work practice in many organizations during the COVID-19 pandemic. While remote work has received much scholarly and public attention over the years, we still know little about how people with disabilities engage in remote work from their homes and what access means in this context. To understand and rethink accessibility in remote work, the present paper studies work-from-home practices of neurodivergent professionals who have Autism Spectrum Disorder, Attention Deficit Hyperactivity Disorder, learning disabilities (e.g., dyslexia) and psychosocial disabilities (e.g., anxiety, depression). We report on interviews with 36 neurodivergent professionals who are working from home during the pandemic. Our findings reveal that while working from home, neurodivergent professionals create accessible physical and digital workspaces, negotiate accessible communication practices, and reconcile tensions between productivity and wellbeing. Our analysis reconsiders what access means in remote work for neurodivergent professionals and offers practical insights for inclusive work practices and accessibility improvements in remote collaboration tools.
https://doi.org/10.1145/3449282
There has been growing interest in CSCW and HCI in understanding the experiences of programmers in the workplace. However, the vast majority of these studies have focused on sighted programmers and, as a result, the experiences of programmers with visual impairments in professional contexts remain understudied. We address this gap by reporting on findings from semi-structured interviews with 22 programmers with visual impairments. We found that programmers with visual impairments interact with a complex ecosystem of tools and that a significant part of their job entails performing work to overcome the accessibility challenges inherent in this ecosystem. Furthermore, we found that the visual nature of various programming activities impedes collaboration, which necessitates the co-creation of new work practices through a series of sociotechnical interactions. These sociotechnical interactions often required invisible work and articulation work on the part of the programmers with visual impairments.
https://doi.org/10.1145/3449203
To understand the lived experience of how people with disabilities telework, 25 people were interviewed. The participants included people who are blind or low vision, are deaf or hard of hearing, are neurodiverse, have limited mobility/dexterity, and have chronic health issues. The interviews focused on how they used video calling, screen sharing, and collaborative editing technologies to accomplish their telework. The interviews revealed how design choices made in telework technologies interact with people’s abilities, especially for those who are blind or low vision, since the tools rely heavily on the visual channel to enable remote collaboration. A central theme emerged around how design choices made in telework technologies affect the digital representation of people’s online activities in the video call interface: those who turn off their video (because they are blind or do not want to expend the cognitive effort to present themselves over video) are relegated to a blank rectangular frame with their name; those who are deaf and speak through a sign language interpreter never show up in interfaces that use active speaker detection to choose which video streams to display. Users with disabilities may avoid using screen sharing and collaborative editing tools that “leak” cues disclosing their disabilities. Because the interviews were conducted during the first month of the COVID-19 pandemic response, they also provide a preview of how the sudden shift to pervasive teleworking affected participants’ telework experience.
https://doi.org/10.1145/3449104
Even when they are able to secure employment, people with cognitive disabilities typically encounter significant difficulties in the workplace. In this paper, we focus on Mixed-Ability workplaces: work settings in which people without disabilities and with different types of disabilities collaborate on a daily basis. The case study for our exploratory research is a university library that has been able to support a mixed-ability work setting for over four years. We describe how a theory from cognitive linguistics (Conceptual Metaphor Theory) can be used to explore the challenges that people encounter in mixed-ability workplaces, identify the cognitive processes that differ between neurotypical team leaders and workers with cognitive disabilities, and translate these findings into design recommendations for embodied technologies that support mixed-ability workplaces.
https://doi.org/10.1145/3479528
Prior work on AI-enabled assistive technology (AT) for people with visual impairments (VI) has treated navigation largely as an independent activity. Consequently, much effort has focused on providing individual users with wayfinding details about the environment, including information on distances, proximity, obstacles, and landmarks. However, independence is also achieved by people with VI through interacting with others, such as in collaboration with sighted guides. Drawing on the concept of interdependence, this research presents a systematic analysis of sighted guiding partnerships. Using interaction analysis as our primary mode of data analysis, we conducted an empirical, qualitative study with 4 couples, each made up of a person with a vision impairment and their sighted guide. Our results show how pairs used interactional resources such as turn-taking and body movements to both co-constitute a common space for navigation and repair moments of rupture to this space. We use this analysis to present an exemplary case of interdependence and draw out implications for designing AI-enabled AT that shifts the emphasis away from independent navigation and towards the carefully coordinated actions between people navigating together.
https://doi.org/10.1145/3449143
Accessibility assessments typically focus on determining a binary measurement of task performance success/failure, and often neglect to acknowledge the nuances of those interactions. Although a large population of blind people find smartphone interactions possible, many experiences take a significant toll and can have a lasting negative impact on the individual and their willingness to step out of technological comfort zones. There is a need to assist and support individuals with the adoption and learning process of new tasks to mitigate these negative experiences. We contribute a human-powered nonvisual task assistant for smartphones that provides pervasive assistance. We argue that, in addition to success, one must carefully consider promoting and evaluating factors such as self-efficacy and the belief in one's abilities to control and learn to use technology. In this paper, we show that effective assistance positively affects self-efficacy when performing new tasks with smartphones, shapes perceptions of accessibility, and enables systemic task-based learning.
https://doi.org/10.1145/3449188
Real-time captioning is a critical accessibility tool for many d/Deaf and hard of hearing (DHH) people. While the vast majority of captioning work has focused on formal settings and technical innovations, in contrast, we investigate captioning for informal, interactive small-group conversations, which have a high degree of spontaneity and foster dynamic social interactions. This paper reports on semi-structured interviews and design probe activities we conducted with 15 DHH participants to understand their use of existing real-time captioning services and future design preferences for both in-person and remote small-group communication. We found that our participants’ experiences of captioned small-group conversations are shaped by social, environmental, and technical considerations (e.g., interlocutors’ pre-established relationships, the type of captioning displays available, and how far captions lag behind speech). When considering future captioning tools, participants were interested in greater feedback on non-speech elements of conversation (e.g., speaker identity, speech rate, volume) both for their personal use and to guide hearing interlocutors towards more accessible communication. We contribute a qualitative account of DHH people’s real-time captioning experiences during small-group conversation and future design considerations to better support the groups being captioned, both in person and online.
https://doi.org/10.1145/3479578
Live streaming refers to the broadcast of real-time videos, allowing people to have synchronous interactions. While researchers’ interest in live streaming has increased recently, the accessibility of live streaming for people with visual impairments remains under-examined. Further studies are necessary to gain a better understanding of how streamers with visual impairments (SVI) engage in various activities on live streaming platforms. Based on semi-structured interviews with 14 participants, we identified SVI’s motivations for live streaming, their unique interactions with videos and people on live streaming platforms, and the challenges they face during live streaming. Our analysis of the identified themes revealed the absence of an SVI-centered community, as well as accessibility issues SVI face while learning to live stream, using tools, and interacting with people. Based on the results of this study, we present design opportunities to better support SVI on live streaming platforms.
https://doi.org/10.1145/3476038