
WWDC 2025: Can Apple finally fix Voice Control?

Disabled people rely on Voice Control—but it’s overdue for an upgrade

Promotional graphic for WWDC 2025 featuring a translucent Apple logo above a large microphone icon, with colourful text reading “WWDC25 Sleek peek”.

As WWDC 2025 approaches, rumours swirl about what Apple may unveil: enhanced AI features, a new naming convention for all the operating systems, and a visual glass-like overhaul, including round Home Screen icons. There’s also speculation about improvements to Apple Intelligence—a brand that, despite the hype, hasn’t exactly set the world on fire.

But for disabled people like me, all eyes are on something less flashy but far more consequential: Voice Control.

Voice Control today: clever, capable—and still flawed

Apple’s Voice Control feature, first introduced six years ago in macOS Catalina, allows hands-free control of a Mac, iPhone, or iPad. It was a game-changer for many disabled people, particularly those with motor impairments who rely on dictation and voice navigation.

But its limitations are now well-documented—and growing harder to excuse in 2025.

Voice Control still doesn’t adapt or learn from recognition errors. If it mishears your phrase once, it will likely mishear it again tomorrow. For users with atypical or impaired speech, this leads to a repetitive and frustrating experience that chips away at productivity and independence. In contrast, Nuance Dragon’s dictation engine can learn, correct, and gradually improve.

Clipboard support is also limited. You can’t paste rich text or styled content; everything arrives as plain Helvetica 9pt. For disabled people who run businesses or write professionally, this makes everyday tasks feel amateurish.

New Apple research: A promising step for atypical speech

Just days ahead of WWDC, Apple published research into how AI could better understand and process atypical speech patterns—those affected by conditions such as ALS, Parkinson’s, or cerebral palsy.

As reported by 9to5Mac, the company’s engineers have trained models using a set of “voice quality dimensions” like breathiness, pitch monotony, and intelligibility. The goal? To teach Apple’s AI systems how to recognise voices with non-standard articulation more accurately.

This research is incredibly encouraging. Back in September 2024, I wrote for The Register calling on Apple to do exactly this: use its AI expertise to improve Voice Control dictation for people with atypical speech. Seeing signs of progress now is heartening—but as ever, the key will be practical implementation.

We’ve seen too many exciting white papers that never leave the lab. What matters now is integration—on-device, in daily use, across macOS and iOS.

What Voice Control really needs in 2025

If Apple is serious about accessibility—and the promise of “Apple Intelligence”—it must deliver:

  • Learning and memory: a Voice Control that can be corrected, and that remembers those corrections
  • Customisable commands: not just for accessibility, but for productivity and personalisation
  • Rich text support: formatting that respects the professional needs of disabled users
  • Integration of AI that works with atypical speech: not just as a research demo, but in live software

Final thoughts

Apple deserves credit for introducing Voice Control. But six years on, it hasn’t grown with the needs of the disabled community it serves. As AI becomes more central to Apple’s product line, it’s vital that accessibility isn’t treated as an afterthought, but as a test case for what intelligent, adaptive systems can achieve.

WWDC 2025 is Apple’s chance to prove it.

This piece is part of our three-part WWDC 2025 accessibility series. You can also read our full wishlist for what disabled people want to see at WWDC.

Colin Hughes is a former BBC producer who campaigns for greater access and affordability of technology for disabled people.
