Say It All: Feedback for Improving Non-Visual Presentation Accessibility

Abstract

Presenters commonly use slides as visual aids for informative talks. When presenters fail to verbally describe the content on their slides, blind and visually impaired audience members lose access to necessary content, making the presentation difficult to follow. Our analysis of 90 existing presentation videos revealed that 72% of 610 visual elements (e.g., images, text) were insufficiently described. To help presenters create accessible presentations, we introduce Presentation A11y, a system that provides real-time and post-presentation accessibility feedback. Our system analyzes visual elements on the slide and the transcript of the verbal presentation to provide element-level feedback on what visual content needs to be further described or even removed. Presenters using our system with their own slide-based presentations described more of the content on their slides, and identified 3.26 times more accessibility problems to fix after the talk than when using a traditional slide-based presentation interface. Integrating accessibility feedback into content creation tools will improve the accessibility of informational content for all.

Authors
Yi-Hao Peng
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
JiWoong Jang
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Jeffrey P. Bigham
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Amy Pavel
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
DOI

10.1145/3411764.3445572

Paper URL

https://doi.org/10.1145/3411764.3445572


Conference: CHI 2021

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

Session: Accessible Content Creation

[A] Paper Room 01, 2021-05-10 17:00:00~2021-05-10 19:00:00 / [B] Paper Room 01, 2021-05-11 01:00:00~2021-05-11 03:00:00 / [C] Paper Room 01, 2021-05-11 09:00:00~2021-05-11 11:00:00