This study session has ended. Thank you for participating.
Joint attention (JA) is a crucial component of social interaction, relying heavily on visual cues like eye gaze and pointing. This creates barriers for blind and visually impaired (BVI) people to engage in JA with sighted peers. Yet little research has characterised these barriers or the strategies BVI people employ to overcome them. We interviewed ten BVI adults to understand their JA experiences and analysed videos of four BVI children with eight sighted partners engaging in activities conducive to JA. Interviews revealed that the lack of JA feedback is perceived as voids that block engagement, a problem exacerbated in group settings, and that the burden of filling those voids falls largely on BVI people themselves. Video analysis anchored the absence of the person element within typical JA triads, suggesting a potential for technology to foster alternative dynamics between BVI and sighted people. We argue these findings can inform the design of technology that supports more inclusive JA interactions.
Remote sighted assistance (RSA) offers prosthetic support to people with visual impairments (PVI) through image- or video-based conversations with remote sighted assistants. While useful, RSA services introduce privacy concerns, as PVI may inadvertently reveal private visual content. Solutions have emerged to address these concerns in image-based asynchronous RSA, but solutions for video-based synchronous RSA remain largely unexplored. In this study, we developed BubbleCam, a high-fidelity prototype that allows PVI to conceal objects beyond a chosen distance during RSA, granting them privacy control. Through an exploratory field study with 24 participants, we found that 22 appreciated the privacy enhancements offered by BubbleCam. Users gained autonomy and reduced embarrassment by concealing private items, messy areas, or bystanders, while assistants could avoid irrelevant content. Importantly, BubbleCam maintained RSA's primary function without compromising privacy. Our study highlights a cooperative approach to privacy preservation, transitioning the traditionally individual task of maintaining privacy into an interactive, engaging privacy-preserving experience.
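The distance-based concealment the abstract describes can be pictured as masking every pixel whose depth exceeds a chosen radius before the frame is streamed to the assistant. The sketch below is a minimal illustration of that idea under assumed inputs (an RGB frame plus an aligned per-pixel depth map); the function name, parameters, and fill value are hypothetical and not taken from the BubbleCam paper.

```python
import numpy as np

def bubble_mask(frame, depth, radius_m=1.5, fill=0):
    """Blank out pixels farther than radius_m (illustrative sketch, not BubbleCam's actual pipeline).

    frame: (H, W, 3) uint8 RGB image
    depth: (H, W) float32 per-pixel depth in meters, aligned with frame
    """
    out = frame.copy()
    out[depth > radius_m] = fill  # everything outside the "bubble" becomes a flat fill color
    return out

# Synthetic example: a 4x4 frame whose right half lies 3 m away.
frame = np.full((4, 4, 3), 200, dtype=np.uint8)
depth = np.zeros((4, 4), dtype=np.float32)
depth[:, 2:] = 3.0
masked = bubble_mask(frame, depth, radius_m=1.5)
print(masked[0, 0].tolist(), masked[0, 3].tolist())  # → [200, 200, 200] [0, 0, 0]
```

In a real system the depth map would come from a phone's depth sensor or a monocular depth estimator, and the radius would be adjustable by the user during the call.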
Shape-changing skin is an exciting modality due to its accessible and engaging nature. Its softness and flexibility make it adaptable to different interactive devices that children with and without visual impairments can share. Although its potential as an emotionally expressive medium has been shown for sighted adults, its potential as an inclusive modality remains unexplored. This work explores shape-emotional mappings in children with and without visual impairment. We conducted a user study with 50 children (26 with visual impairment) to investigate their emotional associations with five skin shapes and two movement conditions. Results show that shape-emotional mappings depend on visual abilities. Our study raises awareness of the influence of visual experience on tactile vocabulary and emotional mapping among sighted, low-vision, and blind children. We conclude by discussing the causal associations between tactile stimuli and emotions and suggest inclusive design recommendations for shape-changing devices.
Mobile apps have become indispensable for accessing and participating in various environments, especially for users with visual impairments, who rely on screen readers to read the content of each screen and understand which components to operate. Screen readers read the hint-text attribute of a text input component to remind visually impaired users what to fill in. Unfortunately, based on our analysis of 4,501 Android apps with text inputs, over 76% of them are missing hint-text. These omissions are mostly caused by developers' lack of awareness of visually impaired users. To address this, we developed HintDroid, an LLM-based hint-text generation model that analyzes the GUI information of input components and uses in-context learning to generate hint-text. To ensure the quality of hint-text generation, we further designed a feedback-based inspection mechanism that adjusts the generated hint-text. Automated experiments demonstrate high BLEU scores, and a user study further confirms its usefulness. HintDroid can help not only visually impaired individuals but also ordinary users understand the requirements of input components. HintDroid demo video: https://youtu.be/FWgfcctRbfI.
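The in-context learning step the abstract mentions amounts to assembling a few-shot prompt from the GUI metadata of an input component before calling the LLM. The sketch below shows one plausible way to build such a prompt; the field names, example hints, and prompt format are illustrative assumptions, not HintDroid's actual implementation.

```python
# Hypothetical few-shot examples pairing GUI context with a good hint-text.
EXAMPLES = [
    {"activity": "LoginActivity", "label": "Email", "hint": "Enter your email address"},
    {"activity": "SearchActivity", "label": "Search", "hint": "Type keywords to search"},
]

def build_prompt(activity, label, nearby_text):
    """Assemble an in-context prompt for hint-text generation (illustrative sketch)."""
    lines = ["Generate a concise hint-text for an Android text input component."]
    for ex in EXAMPLES:
        lines.append(f"Activity: {ex['activity']} | Label: {ex['label']} -> Hint: {ex['hint']}")
    # The query component, with nearby on-screen text as extra GUI context.
    lines.append(f"Activity: {activity} | Label: {label} | Nearby: {', '.join(nearby_text)} -> Hint:")
    return "\n".join(lines)

prompt = build_prompt("CheckoutActivity", "Coupon", ["Apply", "Total"])
print(prompt)
```

The resulting string would be sent to an LLM, and a feedback loop like the one the abstract describes could re-prompt with the model's draft hint whenever an inspection check (e.g. length or relevance) fails.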
Continued social participation is a key determinant of healthy aging and lowers the risks of isolation and loneliness. While online technologies can provide a convenient way for older adults to connect socially, some prefer connecting offline with others in their community, which can pose different challenges, especially for those with disabilities. Yet we still know little about how older adults with visual disabilities might leverage technology to address their needs for engaging in social events in their communities. We interviewed 16 blind or visually impaired (BVI) adults aged 60 or older to understand their experiences engaging in community social activities and the role of technology in the process. We describe the challenges participants faced connecting with others in their community and their use of technology to overcome them. Based on our findings, we discuss design opportunities for technology to help BVI older adults manage the hidden labor of social participation.