Carnegie Mellon University | Human-Computer Interaction Institute
Presenters commonly use slides as visual aids for informative talks. When presenters fail to verbally describe the content on their slides, blind and visually impaired audience members lose access to necessary content, making the presentation difficult to follow. Our analysis of 90 presentation videos revealed that 72% of 610 visual elements (e.g., images, text) were insufficiently described. To help presenters create accessible presentations, we introduce Presentation A11y, a system that provides real-time and post-presentation accessibility feedback. Our system analyzes the visual elements on each slide and the transcript of the verbal presentation to provide element-level feedback on which visual content needs to be further described or even removed. Presenters using our system with their own slide-based presentations described more of the content on their slides, and identified 3.26 times more accessibility problems to fix after the talk than when using a traditional slide-based presentation interface. Integrating accessibility feedback into content creation tools will improve the accessibility of informational content for all.
@inproceedings{peng2021say,
  title     = {Say It All: Feedback for Improving Non-Visual Presentation Accessibility},
  author    = {Peng, Yi-Hao and Jang, JiWoong and Bigham, Jeffrey P. and Pavel, Amy},
  booktitle = {Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems},
  pages     = {1--12},
  year      = {2021}
}