Beyond the intrinsic value of inclusion for all, live captioning must be provided for all live audio content in synchronized media on the Web under WCAG Success Criterion 1.2.4 (Level AA), which the University of Minnesota has adopted as its accessibility standard.
"Captions are provided for all live audio content in synchronized media. (Level AA)"
Since the advent of C-Print and Communication Access Realtime Translation (CART), along with laptops and adapters that can display captions on screens, people who are deaf or hard of hearing, among others, can receive direct captioning of live events. Streaming video of an event over the open Web also requires captions, so live captioning supports compliance both on-site and in web communications.
Types of Live Captioning
Two basic types of live captioning exist: C-Print/TypeWell and CART.
C-Print and TypeWell
C-Print and TypeWell provide a content-based, meaning-for-meaning realtime transcription (similar to an interpreter) rather than a verbatim one. They convey the meaning in fewer words. C-Print's abbreviation system is phonetically based, while TypeWell's is based on spelling. Both are typically less expensive than CART.
CART
CART provides a realtime verbatim, word-for-word transcription, similar to court reporting.
Services for Captioning Live Events
ITSS does not have the staff to provide captioning services for live events. UMD Disability Resources (DR) coordinates live captioning with a remote service, Alternative Communication Systems (ACS), for UMD departments and programs requiring realtime captioning.
- Cost is $98 an hour for CART and $60 an hour for C-Print.
- Cost is paid for by your department or program, not by DR.
- Make requests well in advance (2 weeks minimum).
- Contact [email protected] to set up a consultation.
Captioning Live Events References
- Communication Access Realtime Translation by National Association of the Deaf
- Communication Access Real-Time Translation by Wikipedia
- Free Live Transcript - "This is an open-source exploration of creating live transcripts of speech on the web, that can be displayed (and edited) in real time on a big screen, or watched on anybody's personal device. The underlying transcription process is based on David Walsh's blog post, and with apologies for only working in Google Chrome. The project was started by Mark Noonan at Code for Atlanta, where a growing number of people are learning and making contributions. The code is available on GitHub. Currently, any browser can watch a transcript, but only Chrome can generate them, because only Chrome has implemented the experimental Web Speech Recognition API Specification, so you will need to use Chrome for this page to work…"
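The Chrome-only limitation noted above comes from the browser-provided Web Speech API. As a rough illustration (not code from the Free Live Transcript project; the function name and callback are hypothetical), a page can feature-detect the `SpeechRecognition` interface and start a running transcript like this:

```javascript
// Minimal sketch of live transcription with the Web Speech API.
// At the time of writing, speech recognition is implemented only in
// Chromium-based browsers, under the webkitSpeechRecognition prefix.
function startTranscript(onText) {
  const Recognition =
    (typeof window !== "undefined" &&
      (window.SpeechRecognition || window.webkitSpeechRecognition)) ||
    null;
  if (!Recognition) {
    return null; // unsupported browser; caller should show a fallback message
  }

  const recognition = new Recognition();
  recognition.continuous = true;     // keep listening across pauses in speech
  recognition.interimResults = true; // update the display as words are recognized
  recognition.onresult = (event) => {
    // Concatenate the best alternative of each result into one transcript.
    let text = "";
    for (const result of event.results) {
      text += result[0].transcript;
    }
    onText(text); // hand the running transcript to the display code
  };
  recognition.start();
  return recognition;
}
```

In a supporting browser, `startTranscript((text) => el.textContent = text)` would stream recognized speech into an element; elsewhere it returns `null`, matching the "only Chrome can generate" behavior described above.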