*This is my submission to the PSU Open Innovation Challenge. In the event it's not accepted, I don't want this text lost, because of its implications for accessibility on the internet and our ability to transform it.*
Accessibility: it's not just a concept, it's people. It's people we make accommodations for, people we do testing for, and people we dedicate resources to in order to ensure their unique needs are addressed in their educational journey. On campus, we build ramps, braille signage, tactile plates at crossings, and other physical supports that anyone can see and use, including those who rely on them.
Unfortunately, our digital environments come with asterisks. We accept that a website is "optimized for accessibility" only on highly specific browser versions, and only if those in need have downloaded the right toolsets. The Universal Design for Learning (UDL) framework suggests environments should be universally accessible to all without the need for additional accommodation.
What if, instead of requiring those in need of assistive technologies to obtain them, we included those capabilities natively in all of our experiences?
What if users didn't need JAWS or Dragon NaturallySpeaking because we provided those capabilities natively in our systems?
By leveraging the W3C Web Speech API specification, we can have the browser talk to us. The same specification also lets us listen to and process a user's voice in real time. Using the two together, we can build conversational systems with no plugins required, and they work on mobile phones too!
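To give a feel for how little code this takes, here is a minimal sketch of the two halves of the API in plain browser JavaScript. The function names `speak` and `listen` are my own illustrative wrappers, not part of the specification, and the sketch assumes a browser that implements the API (Chromium-based browsers expose recognition under a `webkit` prefix, hence the fallback):

```javascript
// Speech synthesis: have the browser read text aloud.
function speak(text) {
  const utterance = new SpeechSynthesisUtterance(text);
  window.speechSynthesis.speak(utterance);
}

// Speech recognition: listen to the user and hand back a transcript.
function listen(onResult) {
  // Chromium-based browsers still prefix the constructor.
  const Recognition = window.SpeechRecognition || window.webkitSpeechRecognition;
  const recognition = new Recognition();
  recognition.interimResults = true; // stream partial results in real time
  recognition.onresult = (event) => {
    const transcript = Array.from(event.results)
      .map((result) => result[0].transcript)
      .join('');
    onResult(transcript);
  };
  recognition.start();
}

// Wiring the two together yields a simple conversational loop:
// listen((heard) => speak('You said: ' + heard));
```

Because both halves ship with the browser itself, nothing needs to be installed on the user's machine, which is exactly the UDL goal of accessibility without additional accommodation.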
I currently have this working in our courses, but I want to draw on the expertise of TLT to produce a more generalized version that could be applied to all websites at the university with little effort! You can see an example of this working in our ELMSLN learning environment below.