GeoVisA11y: An AI-based Geovisualization Question-Answering System for Screen-Reader Users
Description

Geovisualizations are powerful tools for communicating spatial information, but they remain largely inaccessible to screen-reader users.

To address this limitation, we present GeoVisA11y, an LLM-based question-answering system that makes geovisualizations accessible through natural language interaction.

The system supports map reading, analysis, interpretation, and navigation by handling analytical, geospatial, visual, and contextual queries.

Through user studies with six screen-reader users and six sighted participants, we demonstrate that GeoVisA11y effectively bridges accessibility gaps while revealing distinct interaction patterns between user groups. We contribute: (1) an open-source, accessible geovisualization system, (2) empirical findings on query and navigation differences, and (3) a dataset of geospatial queries to inform future research on accessible data visualization.

Understanding Nature Engagement Experiences of Blind People
Description

Nature plays a crucial role in human health and well-being, but little is known about how blind people experience and relate to it. We conducted a survey of nature relatedness with blind (N=20) and sighted (N=20) participants, along with in-depth interviews with 16 blind participants, to examine how blind people engage with nature and the factors shaping this engagement. Our survey results revealed lower levels of nature relatedness among blind participants compared to sighted peers. Our interview study further highlighted: 1) current practices and challenges of nature engagement, 2) attitudes and values that shape engagement, and 3) expectations for assistive technologies that support safe and meaningful engagement. We also provide design implications to guide future technologies that support nature engagement for blind people. Overall, our findings illustrate how blind people experience nature beyond vision and lay a foundation for technologies that support inclusive nature engagement.

SoundWeAR: Co-Designing AR Sound Cues to Support Outdoor Awareness for DHH Individuals
Description

For Deaf and Hard of Hearing (DHH) individuals, limited access to sound cues in outdoor environments can reduce situational awareness, making it challenging to notice events and respond to potential dangers. To address this, we investigated the sound-awareness needs of DHH individuals and their preferences for visualizing environmental sounds using AR glasses. We conducted four participatory design workshops with DHH participants, social workers, and designers to explore sound-awareness needs and co-design ideal visual representations. Based on our insights, we conducted interviews with 15 DHH participants to select their preferred visualizations. The most-voted designs were implemented in a prototype, which eight DHH participants evaluated in an outdoor environment. Results demonstrate that visualizing sound cues through AR can enhance situational awareness and increase the sense of safety and confidence among DHH individuals while walking outdoors. Our findings provide design suggestions for translating auditory information into accessible visual representations for DHH users.

Understanding the Feasibility of Auditory Hand-Steering Guidance for Blind and Low-Vision People
Description

Everyday tasks like hand-washing and tea-making require people to steer their hands to use tools, navigating their hands to reach targets while avoiding hazards. Hand-steering becomes challenging when one cannot visually recognize whether their hand is approaching the target and staying clear of hazards. Currently, no practical technological solutions support blind and low-vision (BLV) individuals' hand-steering. We designed and developed two auditory hand-steering guidance methods: VERBAL and Follow-Your-Finger (FYF). VERBAL uses spoken directional instructions, while FYF uses sonification to guide hand-steering. We conducted a user study with 12 BLV participants to evaluate the feasibility of the methods in supporting hand-steering. VERBAL lacked precision, with a 24.6% error rate for one of the easiest conditions, but FYF showed promise, achieving a 4.17% error rate for the same condition. Among the six participants who preferred FYF, the error rate was 1.39%. The results demonstrate the feasibility of auditory hand-steering guidance for BLV individuals.

µCap: Instrumental Music Captions for Deaf and Hard-of-Hearing Individuals
Description

Instrumental music conveys rich affective experiences through acoustic cues, yet instrumental passages often remain inaccessible to Deaf and Hard-of-Hearing (DHH) audiences. Although captioning practices for vocal songs have expanded, instrumental music remains largely uncaptioned, with no established criteria for representing musical content in text. We propose µCap (Music Captions), an automatic instrumental music captioning system that transforms instrumental audio into time-aligned, non-lexical textual renderings enhanced with simple visuals. Drawing on preliminary surveys with DHH individuals and expert group discussions, we developed a phonetic-like captioning schema grounded in music sound analysis and linguistics. We then implemented µCap using audio feature extraction and a Retrieval-Augmented Generation (RAG) pipeline to produce expressive, sound-mimetic captions. Two user evaluations with DHH participants (n=20 and n=15) showed that µCap enhanced music appreciation, immersion, and perceived presence of acoustic detail. This work contributes empirical evidence and insights for designing caption-based visual representations that make instrumental music more accessible.

ASL Educators’ Perspectives on AI for Enhancing Student Learning in American Sign Language Education
Description

Interest in learning American Sign Language (ASL) is growing across higher education institutions in North America, as reflected in rising enrollments. Yet this growth is constrained by limited program availability and few opportunities to practice outside the classroom. AI-based technologies show promise for supporting ASL learning, but educators – who bring essential pedagogical, linguistic, and cultural expertise – have been largely absent from conversations on the design of these tools, with prior work focusing primarily on learners. To address this, we conducted formative interviews with eleven Deaf and one hearing ASL instructor, followed by two focus groups with six Deaf educators, to examine how AI tools could support ASL education. Findings revealed priorities for technology design and considerations for integration into existing pedagogical practices, with attention to curricular, linguistic, and access factors. We offer insights for designing and researching technologies aimed at (1) providing adaptive, structured feedback on signing performance and (2) supporting immersive conversational practice with virtual signing partners.

Social Play Between Deaf and Hard of Hearing Children and Hearing Peers: Learning from Children and School Ecosystems
Description

Social play is an essential pathway for emotional, cognitive, and social development in children. However, Deaf and Hard of Hearing (DHH) children often experience barriers to social play, particularly in mixed-hearing-ability environments (e.g., school playgrounds). In this paper, we conducted interviews with six educators and 19 children with and without hearing loss at a Partially Bilingual School to better understand their experiences during social play. Moreover, we observed a school playground with 46 children over seven weeks at a Full Bilingual School. Findings show that social play between DHH and hearing children is influenced by school culture, peer culture, and child agency. Importantly, some of these barriers can be (partially) overcome through a supportive bilingual and bicultural environment. We propose the concept of contextualized social play technology, which defines a design space aimed at fostering peer culture and individual agency through contextualization within schools. We also provide design insights to inform the development of future inclusive play technologies.
