Profile pictures can convey rich social signals that are often inaccessible to blind and low vision screen reader users. Although there have been efforts to understand screen reader users’ preferences for alternative (alt) text descriptions when encountering images online, profile pictures evoke distinct information needs. We conducted semi-structured interviews with 16 screen reader users to understand their preferences for various styles of profile picture image descriptions in different social contexts. We also interviewed seven sighted individuals to explore their thoughts on authoring alt text for profile pictures. Our findings suggest that detailed image descriptions and user-narrated alt text can provide screen reader users with enjoyable and informative experiences when exploring profile pictures. We also identified mismatches between how sighted individuals would author alt text and what screen reader users prefer to know about profile pictures. We discuss the implications of our findings for social applications that support profile pictures.
Online dating is a space where autistic individuals can find romantic partners with reduced social demands. Autistic individuals are often expected to adapt their behaviors to the social norms underlying the online dating platform to appear as desirable romantic partners. However, given that their autistic traits can lead them to hold different expectations of dating, it is uncertain whether conforming their behaviors to the norm will guide them to the person they truly want. In this paper, we explored the perceptions and expectations of autistic adults in online dating through interviews and workshops. We found that autistic people wanted to know whether they behaved according to the platform's norms, yet they expected to keep their unique characteristics rather than unconditionally conform to the norm. We conclude by providing suggestions for designing inclusive online dating experiences that could foster self-guided decisions by autistic users and embrace their unique characteristics.
We present the design and creation of a disability-first dataset, “BIV-Priv,” which contains 728 images and 728 videos of 14 private categories captured by 26 blind participants to support downstream development of artificial intelligence (AI) models. While best practices in dataset creation typically attempt to eliminate private content, some applications require such content for model development. We describe our approach to creating this dataset with private content in an ethical way, including using props rather than participants’ own private objects and balancing multi-disciplinary perspectives (e.g., accessibility, privacy, computer vision) to meet tangible metrics (e.g., diversity, categories, amount of content) that support AI innovation. We observed challenges that our participants encountered during the data collection, including accessibility issues (e.g., understanding foreground vs. background object placement) and issues due to the sensitive nature of the content (e.g., discomfort in capturing some props, such as condoms, around family members).
High-tech augmentative and alternative communication (AAC) devices can offer vital communication support for those with complex communication needs (CCNs). Unfortunately, these devices are rarely adopted. Abandonment has been linked to many factors, most commonly stigma resulting from the visibility of the device and its intrusion into other essential modes of communication, such as body language. However, visible AAC is strategically useful for setting conversational expectations. In this work, we explore how we might envision AAC to address these tensions directly. We conduct user-centred design activities to build three high-fidelity AAC prototypes with different communities of people with CCNs, specialists, and stakeholders. The prototypes demonstrate different form factors, visibility, and modes of input/output. Subsequently, we conduct two qualitative focus groups using convergent and divergent co-design methods with people with the language impairment aphasia, supporting ideation of seven discreet and wearable low-fidelity AAC prototypes and critique of the three high-fidelity prototypes.
Captions help deaf and hard-of-hearing (DHH) individuals understand video content by visually conveying voice information. In speech, the literal content and paralinguistic cues (e.g., pitch and nuance) work together to convey the speaker’s true intention. However, current captions are limited in their capacity to deliver fine nuances because they cannot fully convey these paralinguistic cues. This paper proposes an audio-visualized caption system that automatically renders paralinguistic cues as various caption elements (thickness, height, font type, and motion). A comparative study with 20 DHH participants demonstrates how our system helps DHH individuals better access paralinguistic cues while watching videos. Particularly in the case of formal talks, participants could accurately identify the speaker’s nuance more often than with current captions, without any practice or training. Once issues of legibility and familiarity are addressed, the proposed caption system has the potential to enrich DHH individuals’ video-watching experience, much as hearing people enjoy videos.
Information access is one of the most significant challenges faced by d/Deaf signers due to a lack of sign language information. As machine-driven solutions face challenges, we seek to understand how d/Deaf communities can create, share, and support the growth of sign language content. Based on interviews with 12 d/Deaf people in China, we found that d/Deaf videos, i.e., sign language videos created by and for d/Deaf people, can be crucial information sources and educational materials. Combining these interviews with a content analysis of 360 d/Deaf videos, we reveal how d/Deaf communities co-create information accessibility through collaboration in online content creation. We uncover two major challenges that creators encounter: difficulties in translation and inconsistent content quality. We propose potential opportunities and future research directions to support d/Deaf people's needs for sign language information through collaboration within d/Deaf communities.