
iOS engineers detail Apple’s approach to improving accessibility with iOS 14

Apple just gave an overhaul to its accessibility landing page to better highlight the native features in macOS and iOS that allow users’ devices to “work the way you do” and encourage everyone to “make something wonderful.” Now a new interview with Apple’s accessibility and AI/ML engineers goes into more detail on the company’s approach to improving accessibility with iOS 14.

iOS accessibility engineer Chris Fleizach and AI/ML team member Jeff Bigham spoke with TechCrunch about how Apple thought about evolving its accessibility features from iOS 13 to 14 and the cross-team collaboration it took to achieve those goals.

One of the biggest accessibility improvements arriving with iOS 14 this fall is the new Screen Recognition feature. It builds on VoiceOver, which now uses “on-device intelligence to recognize elements on your screen to improve VoiceOver support for app and web experiences.”

Here’s how Apple describes Screen Recognition:

Screen Recognition automatically detects interface controls to aid in navigating apps
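For a sense of what Screen Recognition has to infer on its own, here’s a minimal sketch of the explicit accessibility markup a well-labeled app already gives VoiceOver; the view controller and label below are illustrative, not taken from Apple’s implementation:

```swift
import UIKit

final class PlayerViewController: UIViewController {
    // An image-only control that VoiceOver can't describe from the artwork alone.
    private let playButton = UIButton(type: .custom)

    override func viewDidLoad() {
        super.viewDidLoad()
        playButton.setImage(UIImage(systemName: "play.fill"), for: .normal)

        // Explicit accessibility metadata: the kind of information well-labeled
        // apps provide, and that Screen Recognition tries to infer from the
        // pixels when it's missing.
        playButton.isAccessibilityElement = true
        playButton.accessibilityLabel = "Play"
        playButton.accessibilityTraits = .button

        view.addSubview(playButton)
    }
}
```

When an app ships controls without metadata like this, Screen Recognition’s on-device model gives VoiceOver its best guess at the element and its role instead.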

A separate iOS 14 accessibility feature, Sound Recognition, uses “on-device intelligence to detect and identify important sounds such as alarms, and alerts you to them using notifications.”
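Developers can build something along the same lines with Apple’s SoundAnalysis framework. The sketch below streams microphone audio into a classifier; it is not Apple’s implementation, and SirenClassifier stands in for a hypothetical custom Core ML sound-classification model:

```swift
import AVFoundation
import CoreML
import SoundAnalysis

// Receives classification results as audio streams through the analyzer.
final class AlertSoundObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first,
              top.confidence > 0.8 else { return }
        print("Detected sound: \(top.identifier)")
        // A real app would post a local notification here.
    }
}

final class SoundListener {
    private let engine = AVAudioEngine()
    private let observer = AlertSoundObserver()
    private var analyzer: SNAudioStreamAnalyzer?

    func start() throws {
        let format = engine.inputNode.outputFormat(forBus: 0)
        let analyzer = SNAudioStreamAnalyzer(format: format)
        self.analyzer = analyzer

        // SirenClassifier is a hypothetical custom Core ML sound classifier.
        let model = try SirenClassifier(configuration: MLModelConfiguration()).model
        let request = try SNClassifySoundRequest(mlModel: model)
        try analyzer.add(request, withObserver: observer)

        // Feed microphone buffers to the analyzer; everything runs on device.
        engine.inputNode.installTap(onBus: 0, bufferSize: 8192, format: format) { buffer, time in
            analyzer.analyze(buffer, atAudioFramePosition: time.sampleTime)
        }
        try engine.start()
    }
}
```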

Here’s how Fleizach describes Apple’s approach to improving accessibility with iOS 14 and the speed and precision that come with Screen Recognition:

“We looked for areas where we can make inroads on accessibility, like image descriptions,” said Fleizach. “In iOS 13 we labeled icons automatically – Screen Recognition takes it another step forward. We can look at the pixels on screen and identify the hierarchy of objects you can interact with, and all of this happens on device within tenths of a second.”
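Apple hasn’t published how Screen Recognition is built, but the on-device pipeline Fleizach describes maps roughly onto running an object-detection model over a screenshot with Core ML and Vision. Here’s a sketch along those lines, where UIElementDetector is a hypothetical model standing in for whatever Apple actually uses:

```swift
import CoreGraphics
import CoreML
import Vision

// A detected control: its guessed role plus where it sits on screen.
struct DetectedElement {
    let label: String        // e.g. "button", "text field", "tab bar"
    let confidence: Float
    let boundingBox: CGRect  // normalized Vision coordinates
}

// UIElementDetector is a hypothetical object-detection model, not Apple's.
func detectElements(in screenshot: CGImage,
                    completion: @escaping ([DetectedElement]) -> Void) throws {
    let coreMLModel = try UIElementDetector(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        let observations = request.results as? [VNRecognizedObjectObservation] ?? []
        completion(observations.map { obs in
            DetectedElement(label: obs.labels.first?.identifier ?? "unknown",
                            confidence: obs.labels.first?.confidence ?? 0,
                            boundingBox: obs.boundingBox)
        })
    }

    // Everything runs on device; no image data leaves the phone.
    try VNImageRequestHandler(cgImage: screenshot, options: [:]).perform([request])
}
```

In the real feature, results like these would then be surfaced to VoiceOver as a navigable hierarchy of interactive elements, which is the part Fleizach highlights.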

Bigham notes how crucial collaboration across teams at Apple was in going beyond VoiceOver’s capabilities with Screen Recognition:

“VoiceOver has been the standard bearer for vision accessibility for so long. If you look at the steps in development for Screen Recognition, it was grounded in collaboration across teams — Accessibility throughout, our partners in data collection and annotation, AI/ML, and, of course, design. We did this to make sure that our machine learning development continued to push toward an excellent user experience,” said Bigham.

And that work was labor-intensive:

It was done by taking thousands of screenshots of popular apps and games, then manually labeling them as one of several standard UI elements. This labeled data was fed to the machine learning system, which soon became proficient at picking out those same elements on its own.
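Apple’s actual training setup isn’t public, but the process TechCrunch describes is a standard supervised-learning pipeline. Here’s a rough macOS-side sketch of the equivalent using Create ML’s stock image classifier as a stand-in, assuming labeled crops of UI elements sorted into per-class folders (the paths and class names are hypothetical):

```swift
import CreateML
import Foundation

do {
    // Labeled data in, UI-element recognizer out. Hypothetical folder layout:
    //   TrainingCrops/button/..., TrainingCrops/slider/..., TrainingCrops/tab_bar/...
    let trainingURL = URL(fileURLWithPath: "/Users/me/TrainingCrops")
    let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingURL))

    // Check how well the model generalizes to elements it has never seen.
    let testURL = URL(fileURLWithPath: "/Users/me/TestCrops")
    let metrics = classifier.evaluation(on: .labeledDirectories(at: testURL))
    print("Test classification error: \(metrics.classificationError)")

    // Export a Core ML model that can run entirely on device.
    try classifier.write(to: URL(fileURLWithPath: "/Users/me/UIElementClassifier.mlmodel"))
} catch {
    print("Training failed: \(error)")
}
```

Note that this sketch only covers classifying an element; locating elements and identifying the hierarchy Fleizach mentions goes further than a stock classifier.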

TechCrunch says not to expect Screen Recognition to come to the Mac quite yet, as it would be a serious undertaking. However, Apple’s new Macs with the company’s custom M1 SoC include a 16-core Neural Engine that would certainly be up to the task, whenever Apple decides to expand this accessibility feature.

Check out the full interview here along with Apple’s new accessibility landing page, as well as a conversation on accessibility between TechCrunch’s Matthew Panzarino and Apple’s Chris Fleizach and Sarah Herrlinger.

