Designing for Sensory Access

Conference
CHI 2026
GeoVisA11y: An AI-based Geovisualization Question-Answering System for Screen-Reader Users
Abstract

Geovisualizations are powerful tools for communicating spatial information, but are inaccessible to screen-reader users. To address this limitation, we present GeoVisA11y, an LLM-based question-answering system that makes geovisualizations accessible through natural language interaction. The system supports map reading, analysis, interpretation and navigation by handling analytical, geospatial, visual and contextual queries. Through user studies with six screen-reader users and six sighted participants, we demonstrate that GeoVisA11y effectively bridges accessibility gaps while revealing distinct interaction patterns between user groups. We contribute: (1) an open-source, accessible geovisualization system, (2) empirical findings on query and navigation differences, and (3) a dataset of geospatial queries to inform future research on accessible data visualization.

Award
Best Paper
Authors
Chu Li
University of Washington, Seattle, Washington, United States
Rock Yuren Pang
University of Washington, Seattle, Washington, United States
Arnavi Chheda-Kothary
University of Washington, Seattle, Washington, United States
Ather Sharif
University of Washington, Seattle, Washington, United States
Henok Assalif
University of Washington, Seattle, Washington, United States
Jeffrey Heer
University of Washington, Seattle, Washington, United States
Jon E. Froehlich
University of Washington, Seattle, Washington, United States
Understanding Nature Engagement Experiences of Blind People
Abstract

Nature plays a crucial role in human health and well-being, but little is known about how blind people experience and relate to it. We conducted a survey of nature relatedness with blind (N=20) and sighted (N=20) participants, along with in-depth interviews with 16 blind participants, to examine how blind people engage with nature and the factors shaping this engagement. Our survey results revealed lower levels of nature relatedness among blind participants compared to sighted peers. Our interview study further highlighted: 1) current practices and challenges of nature engagement, 2) attitudes and values that shape engagement, and 3) expectations for assistive technologies that support safe and meaningful engagement. We also provide design implications to guide future technologies that support nature engagement for blind people. Overall, our findings illustrate how blind people experience nature beyond vision and lay a foundation for technologies that support inclusive nature engagement.

Authors
Mengjie Tang
Southeast University, Nanjing, China
Xinman Li
Southeast University, Nanjing, China
Juxiao Zhang
Nanjing Normal University of Special Education, Nanjing, China
Franklin Mingzhe Li
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Zhuying Li
Southeast University, Nanjing, China
SoundWeAR: Co-Designing AR Sound Cues to Support Outdoor Awareness for DHH Individuals
Abstract

For Deaf and Hard of Hearing (DHH) individuals, limited access to sound cues in outdoor environments can reduce situational awareness, making it challenging to notice events and respond to potential dangers. To address this, we investigated DHH individuals' sound-awareness needs and their preferences for visualizing environmental sounds using AR glasses. We conducted four participatory design workshops with DHH participants, social workers, and designers to explore sound awareness needs and co-design ideal visual representations. Based on our insights, we conducted interviews with 15 DHH participants to select their preferred visualizations. The most voted designs were implemented in a prototype, which eight DHH participants evaluated in an outdoor environment. Results demonstrate that visualizing sound cues through AR can enhance situational awareness and increase DHH individuals' sense of safety and confidence while walking outdoors. Our findings provide design suggestions for translating auditory information into accessible visual representations for DHH users.

Authors
Anna Surovkova
Southern University of Science and Technology, Shenzhen, China
Tianze Xie
Southern University of Science and Technology, Shenzhen, China
Xinan Yang
Southern University of Science and Technology, Shenzhen, China
Seungwoo Je
Southern University of Science and Technology, Shenzhen, China
Understanding the Feasibility of Auditory Hand-Steering Guidance for Blind and Low-Vision People
Abstract

Everyday tasks like hand-washing and tea-making require people to steer their hands to use tools, navigating their hands to reach targets while avoiding hazards. Hand-steering becomes challenging when one cannot visually recognize whether their hand is approaching the target and staying away from hazards. Currently, no practical technological solutions support blind and low-vision (BLV) individuals' hand-steering. We designed and developed two auditory hand-steering guidance methods: VERBAL and Follow-Your-Finger (FYF). VERBAL uses spoken directional instructions, while FYF uses sonification to guide hand-steering. We conducted a user study with 12 BLV participants to evaluate the feasibility of the methods in supporting hand-steering. VERBAL lacked precision, with a 24.6% error rate for one of the easiest conditions, but FYF showed promise, achieving a 4.17% error rate for the same condition. Among the six participants who preferred FYF, the error rate was 1.39%. The results demonstrate the feasibility of auditory hand-steering guidance for BLV individuals.

Authors
Yuki Abe
Hokkaido University, Sapporo, Japan
Rose Xin Lin
Singapore Management University, Singapore, Singapore
Kotaro Hara
Singapore Management University, Singapore, Singapore
Daisuke Sakamoto
Hokkaido University, Sapporo, Japan
µCap: Instrumental Music Captions for Deaf and Hard-of-Hearing Individuals
Abstract

Instrumental music conveys rich affective experiences through acoustic cues, yet instrumental passages often remain inaccessible to Deaf and Hard-of-Hearing (DHH) audiences. Although captioning practices for vocal songs have expanded, instrumental music remains largely uncaptioned, with no established criteria for representing musical content in text. We propose µCap (Music Captions), an automatic instrumental music captioning system that transforms instrumental audio into time-aligned, non-lexical textual renderings enhanced with simple visuals. Drawing on preliminary surveys with DHH individuals and expert group discussions, we developed a phonetic-like captioning schema grounded in music sound analysis and linguistics. We then implemented µCap using audio feature extraction and a Retrieval-Augmented Generation (RAG) pipeline to produce expressive, sound-mimetic captions. Two user evaluations with DHH participants (n=20 and n=15) showed that µCap enhanced music appreciation, immersion, and perceived presence of acoustic detail. This work contributes empirical evidence and insights for designing caption-based visual representations that make instrumental music more accessible.

Award
Best Paper
Authors
SooYeon Ahn
Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of
In-Chang Baek
Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of
KyungJoong Kim
GIST, Gwangju, Korea, Republic of
Khai N. Truong
University of Toronto, Toronto, Ontario, Canada
Jin-Hyuk Hong
Gwangju Institute of Science and Technology, Gwangju, Korea, Republic of
ASL Educators’ Perspectives on AI for Enhancing Student Learning in American Sign Language Education
Abstract

Interest in learning American Sign Language (ASL) is growing across higher education institutions in North America, as reflected in rising enrollments. Yet this growth is constrained by limited program availability and few opportunities to practice outside the classroom. AI-based technologies show promise for supporting ASL learning, but educators, who bring essential pedagogical, linguistic, and cultural expertise, have been largely absent from conversations on the design of these tools, with prior work focusing primarily on learners. To address this, we conducted formative interviews with eleven Deaf ASL instructors and one hearing instructor, followed by two focus groups with six Deaf educators, to examine how AI tools could support ASL education. Findings revealed priorities for technology design and considerations for integration into existing pedagogical practices, with attention to curricular, linguistic, and access factors. We offer insights for designing and researching technologies aimed at (1) providing adaptive, structured feedback on signing performance and (2) supporting immersive conversational practice with virtual signing partners.

Authors
Saad Hassan
Tulane University, New Orleans, Louisiana, United States
Laleh Nourian
Rochester Institute of Technology, Rochester, New York, United States
Caluã de Lacerda Pataca
Birmingham City University, Birmingham, United Kingdom
Michelle M. Olson
Rochester Institute of Technology, Rochester, New York, United States
Toni D'aurio
Rochester Institute of Technology, Rochester, New York, United States
Kanupriya Agarwal
Tulane University, New Orleans, Louisiana, United States
Syeda Mah Noor Asad
Tulane University, New Orleans, Louisiana, United States
Garreth W. Tigwell
Rochester Institute of Technology, Rochester, New York, United States
Matt Huenerfauth
Rochester Institute of Technology, Rochester, New York, United States
Social Play Between Deaf and Hard of Hearing Children and Hearing Peers: Learning from Children and School Ecosystems
Abstract

Social play is an essential pathway for emotional, cognitive, and social development in children. However, Deaf and Hard of Hearing (DHH) children often experience barriers to social play, particularly in mixed-hearing-ability environments (e.g., the school playground). In this paper, we conducted interviews with six educators and 19 children with and without hearing loss at a Partially Bilingual School to better understand their experiences during social play. Moreover, we observed a school playground with 46 children over seven weeks at a Full Bilingual School. Findings show that social play between DHH and hearing children is influenced by school culture, peer culture, and child agency. Importantly, some of these barriers can be (partially) overcome through a supportive bilingual and bicultural environment. We propose the concept of contextualized social play technology, which defines a design space aimed at fostering peer culture and individual agency through contextualization within schools. We also provide design insights to inform the development of future inclusive play technologies.

Authors
Jing Zhao
University of Lisbon, Lisbon, Portugal
Isabel Neto
Universidade de Lisboa, Lisbon, Portugal
Michaela Okosi
Gallaudet University, Washington, District of Columbia, United States
Paulo Vaz de Carvalho
Institute of Health Sciences, Portuguese Catholic University, Lisbon, Portugal
Hugo Nicolau
University of Lisbon, Lisbon, Portugal