Apple is introducing a set of new features designed to enhance accessibility for individuals with cognitive, visual, auditory, and motor disabilities. Among the updates is a new personalized voice feature for people who are at risk of losing their ability to speak. It lets them create a synthetic voice that sounds like their own.
Apple says the user first trains the voice model by reading a set of phrases aloud, a process that takes about 15 minutes. Afterward, they can type what they want to say and have it spoken in that voice on iPhone, iPad, and Macs with Apple silicon. The feature uses on-device machine learning to keep the user's information private. At launch, only English is supported.
At the same time, the company is showing a simplified experience tailored to users with cognitive impairments. It retains the most important functions while reducing cognitive load, with streamlined versions of the apps for music, phone calls, messaging, photos, and the camera.
Another new feature is a detection mode for the camera that helps users with visual impairments interact with physical objects that carry multiple text labels. In Apple's example, an iPhone reads the text on a microwave's buttons aloud so the user knows which one to press.
These features are slated for release “later this year,” likely with iOS 17 in the second half of the year.