For people who are blind or visually impaired, JAWS is synonymous with the freedom to run Windows PCs with a remarkable level of control and precision in speech and braille output. The keyboard-controlled application enables navigation in the GUI-based interfaces of websites and Windows programs. Anyone who has ever listened to someone familiar with JAWS (the acronym for “Job Access With Speech”) navigating a PC marvels at the speed of the operator and the quick voice responses from JAWS itself.
For almost 25 years JAWS has dominated the screen reader field and is used by hundreds of thousands of people worldwide. It is arguably one of the greatest achievements in modern assistive technology. We are pleased to announce that Glen Gordon, JAWS architect for over 25 years, is on the agenda at Sight Tech Global, a virtual event (December 2-3) that will look at how AI-related technologies will impact assistive technology and accessibility in the years to come. Participation is free and registration is open.
Gordon, who has been blind from birth, says his interest in accessibility stems from what he called “a selfish desire to use Windows at a time when it was not at all clear that graphical user interfaces could be made accessible”. He holds an MBA from the UCLA Anderson School and learned software development through “the school of hard knocks and a lot of frustration trying to use inaccessible software”. He is an audio and broadcast enthusiast and the host of FSCast, the podcast from Freedom Scientific.
The latest public beta version of JAWS offers a glimpse into the future of the legendary software: it now responds to certain user voice commands – “Voice Assist” – and, thanks to AI technologies from the JAWS team at Freedom Scientific, offers improved access to image descriptions in both JAWS and Fusion (which combines JAWS and ZoomText, a screen magnifier). These updates address two of JAWS’ challenges: the complexity of the available keyboard shortcut set, which intimidates some users, and “alt tags” for images that do not always adequately describe the image.
“The upcoming versions of JAWS, ZoomText, and Fusion will use natural language processing so that many screen-reading commands can be performed verbally,” says Gordon. “You probably don’t want to speak every command, but for the less common ones, Voice Assist offers a way to minimize the number of keyboard shortcuts that you have to learn.”
“By and large, we want to make it easier for people to use a smaller set of instructions to work efficiently. This essentially means making our products smarter and able to predict what a user wants and needs based on their previous actions. The way to get there is imprecise, and we will continue to rely on user feedback to figure out what works best.”
The next generation of screen readers will use AI, among other technologies, and that will be an important topic at Sight Tech Global on December 2nd and 3rd. Get your free ticket now.
Sight Tech Global welcomes sponsors. Current sponsors are Verizon Media, Google, Waymo, Mojo Vision and Wells Fargo. The event is organized by volunteers, and all proceeds will benefit the Vista Center for the Blind and Visually Impaired in Silicon Valley.
Pictured above: JAWS Architect Glen Gordon in his home audio studio.