Yesterday’s WWDC 2025 keynote brought Apple’s bold new Liquid Glass design, system-wide renaming (e.g., iOS 26, macOS Tahoe 26), and a big push for on-device Apple Intelligence, now available in eight more languages and open to developers everywhere.
We also saw major updates to iPad multitasking, Spotlight, and Xcode 26 with on-device foundation-model (LLM) support, plus even lighter game controls thanks to Liquid Glass’s fresh UI.
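For developers, that foundation-model access is arguably the biggest news here. As a rough sketch of what calling the on-device model might look like (assuming the Foundation Models framework shown at WWDC, with its LanguageModelSession type; the helper function below is hypothetical and exact names could still shift during the betas):

```swift
import FoundationModels

// Rough sketch only: assumes the Foundation Models framework as shown at WWDC 2025.
// Type and method names may change across the Xcode 26 betas.
func draftAltText(for description: String) async throws -> String {
    // Check whether the on-device model is available on this device and OS build.
    guard case .available = SystemLanguageModel.default.availability else {
        return "On-device model unavailable."
    }

    // A session keeps context across prompts; here we send a single request.
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Write one sentence of concise alt text for an image described as: \(description)"
    )
    return response.content
}
```

The appeal for accessibility workflows is that this all runs on-device, in line with the keynote’s broader on-device Apple Intelligence push.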
✅ Notable accessibility updates
Announced rather quietly via press release last month, a number of genuinely welcome updates are now available to try in the betas released after the keynote:
- Live Captions on Apple Watch: real-time captions of audio picked up through the iPhone’s Live Listen microphone, with the session controlled remotely from the Watch.
- Voice Control improvements: a new programming mode lets developers dictate Swift code directly in Xcode, and custom vocabulary now syncs across devices.
- System-wide accessibility enhancements: features like Accessibility Nutrition Labels in the App Store; Magnifier for Mac; Eye Tracking support; Braille Access; Accessibility Reader; an updated Personal Voice; Switch Control via brain-computer interfaces (BCIs); an Assistive Access interface for Apple TV; and more.
- Vision improvements on visionOS and Apple Watch: better Zoom on Vision Pro, live object recognition, and tutorial videos via an Apple Support playlist.
These updates represent meaningful progress across Apple’s platforms, particularly Watch, Mac, and visionOS, while embracing accessibility in areas many of us use every day.
⚠️ What Apple still didn’t deliver
That said, significant gaps remain:
- Voice Control still lacks machine-learning-based error correction that learns from how you actually speak. Despite the earlier announcement of AI enhancements for atypical speech, there was no mention of integrating that work into everyday Voice Control for adaptive corrections.
- No AI-based noise isolation for Voice Control dictation, even though studio-quality microphone processing was announced for calls and message dictation with AirPods Pro. Expanding that to the built-in microphones on iPhone and Mac, and to Voice Control itself, would benefit users who dictate in noisy environments at home or at work.
- Voice-based control on Apple Watch remains unfinished. Siri activation still depends on gestures or wrist motions, and dismissing alerts or managing apps by voice alone is still largely out of reach.
- Siri overhaul still missing. There was minimal mention of conversational Siri, and Apple Intelligence, while expanded, didn’t appear to address voice-first scenarios for disabled users, leaving Siri on the sidelines rather than centre stage.
- No mention of formatted dictation. Voice Control still doesn’t support rich-text clipboard actions; copy and paste with formatting is nowhere to be found.
🔻 Final thoughts
WWDC 2025 offered an impressive suite of accessibility tools, from Live Captions on the Watch to voice-dictated code on the Mac. But they still fall short of enabling the kind of hands-free, adaptive, inclusive use that disabled people depend on.
If Apple truly stands by “accessibility is part of our DNA”, the next step must be building intelligent, voice-first experiences, not just deploying AI behind the scenes.
👉 What’s your take?
What additions would bring real change to your Apple experience? Tools like formatted dictation, AI noise isolation, or perhaps a Siri that really listens? Let’s talk in the comments.