Say It All: Feedback for Improving Non-Visual Presentation Accessibility

JiWoong Jang

Carnegie Mellon University | Human-Computer Interaction Institute


A diagram of Presentation A11y's workflow. A presenter's slides first enter Preprocessing, where slide elements (e.g., titles, images, body text) are analyzed and labeled by the system, which also calculates the relative probability that each element will be spoken aloud. While the presenter gives the talk, Presentation A11y provides Real-Time Feedback: spoken words are analyzed to predict upcoming elements by updating the probability map, and slide elements determined to have been referenced are highlighted in real time. After the presentation, Presentation A11y provides Post-Presentation Feedback: slides are shown with the elements highlighted that the system determined were referenced in the talk. A transcript of the talk is displayed alongside the slides, segmented so that each passage appears next to the slide during which it was spoken. Each slide receives a percentage-based score indicating how thoroughly its elements were verbally covered. In this figure, the post-presentation feedback interface shows scores of 60%, 32%, and 100% for three slides. Users can hover over the description of an incompletely described element to highlight the corresponding element on the slide.
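To make the real-time step concrete, here is a minimal sketch of maintaining a probability map over slide elements and updating it as words are spoken. The class name, the normalized token-overlap update rule, and the threshold are illustrative assumptions for this sketch, not the system's published method.

```python
# Sketch of the real-time feedback step: as words are spoken, reweight a
# probability map over slide elements and flag likely-referenced elements.
# The multiplicative reweighting heuristic below is an assumption, chosen
# only to illustrate the idea of "updating the probability map".

def normalize(word: str) -> str:
    return word.strip(".,!?;:").lower()

class ProbabilityMap:
    def __init__(self, elements: dict[str, str]):
        # elements: element id -> element text, e.g. {"title": "Quarterly sales"}
        self.element_tokens = {eid: {normalize(w) for w in text.split()}
                               for eid, text in elements.items()}
        n = len(elements)
        self.probs = {eid: 1.0 / n for eid in elements}  # uniform prior

    def update(self, spoken_word: str) -> None:
        """Boost elements whose text contains the spoken word, then renormalize."""
        w = normalize(spoken_word)
        scores = {eid: (2.0 if w in toks else 1.0) * self.probs[eid]
                  for eid, toks in self.element_tokens.items()}
        total = sum(scores.values())
        self.probs = {eid: s / total for eid, s in scores.items()}

    def likely_referenced(self, threshold: float = 0.5) -> list[str]:
        """Element ids confident enough to highlight in real time."""
        return [eid for eid, p in self.probs.items() if p >= threshold]

# Example: speaking words from the title shifts probability toward it.
pmap = ProbabilityMap({"title": "Quarterly sales results",
                       "chart": "Bar chart of sales by region"})
for word in "our quarterly sales results".split():
    pmap.update(word)
print(pmap.likely_referenced())  # -> ["title"]
```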

Abstract

Presenters commonly use slides as visual aids for informative talks. When presenters fail to verbally describe the content on their slides, blind and visually impaired audience members lose access to necessary content, making the presentation difficult to follow. Our analysis of 90 presentation videos revealed that 72% of 610 visual elements (e.g., images, text) were insufficiently described. To help presenters create accessible presentations, we introduce Presentation A11y, a system that provides real-time and post-presentation accessibility feedback. Our system analyzes visual elements on the slide and the transcript of the verbal presentation to provide element-level feedback on what visual content needs to be further described or even removed. Presenters using our system with their own slide-based presentations described more of the content on their slides, and identified 3.26 times more accessibility problems to fix after the talk than when using a traditional slide-based presentation interface. Integrating accessibility feedback into content creation tools will improve the accessibility of informational content for all.
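As a rough illustration of the element-level coverage scores described above, the sketch below judges an element "described" when enough of its tokens appear in the transcript segment for its slide, and reports the per-slide percentage. The threshold and token-overlap test are assumptions for illustration; the paper's actual matching is more sophisticated.

```python
# Illustrative per-slide verbal-coverage scoring (assumed heuristic, not the
# paper's algorithm): an element counts as described if at least half of its
# tokens were spoken during its slide's transcript segment.
from dataclasses import dataclass

@dataclass
class SlideElement:
    kind: str   # e.g., "title", "image", "body_text"
    text: str   # element text, or an alt description for images

def _tokens(s: str) -> set[str]:
    return {w.strip(".,!?;:").lower() for w in s.split() if w.strip()}

def element_covered(element: SlideElement, segment: str,
                    threshold: float = 0.5) -> bool:
    elem = _tokens(element.text)
    if not elem:
        return True
    return len(elem & _tokens(segment)) / len(elem) >= threshold

def coverage_score(elements: list[SlideElement], segment: str) -> float:
    """Percentage of slide elements judged to have been verbally described."""
    if not elements:
        return 100.0
    covered = sum(element_covered(e, segment) for e in elements)
    return 100.0 * covered / len(elements)

# Example: the title was spoken, but the chart went undescribed.
slide = [SlideElement("title", "Quarterly sales results"),
         SlideElement("image", "Bar chart of sales by region")]
spoken = "Here are our quarterly sales results for this year."
print(f"{coverage_score(slide, spoken):.0f}%")  # -> 50%
```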

Links

[HTML Paper] [PDF] [ACM DL] [Presentation] [Download Extension (to appear on the Google Chrome Web Store)]

Video

BibTeX

@inproceedings{peng2021say,
  title={Say It All: Feedback for Improving Non-Visual Presentation Accessibility},
  author={Peng, Yi-Hao and Jang, JiWoong and Bigham, Jeffrey P and Pavel, Amy},
  booktitle={Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems},
  pages={1--12},
  year={2021}
}