


Apple’s Voice Control has stalled — but AI dictation is racing ahead

A spate of Voice Control bugs in macOS 26 shows how Apple has let voice accessibility stagnate, just as AI tools like Aqua Voice and Microsoft’s Fluid Dictation redefine what’s possible

Illustration showing a crossroads sign under the Apple logo, with one arrow labelled “Voice Control” appearing worn and faded, and the other labelled “AI Dictation” glowing and dynamic — symbolising Apple’s choice between stagnation and innovation in accessibility technology.

Last month I reported on a series of serious Voice Control bugs in macOS 26 that Apple quietly acknowledged but has yet to fix. You can read that piece here. The story struck a nerve because these weren’t minor quirks — they broke the core promise of Voice Control for people who depend on it to use their Macs hands-free.

Frankly, it’s astonishing that Apple allowed these bugs to ship. The glitches are more than technical oversights; they’re symptomatic of something deeper — a feature that has stalled. Voice Control feels like it’s been left to drift, surviving largely for the small group of disabled people who rely on it, but getting little of the attention that once made Apple a leader in voice accessibility.

Yet while Apple’s own tools languish, artificial intelligence is quietly reinventing voice input elsewhere. From Aqua Voice on the Mac to Microsoft’s new Fluid Dictation on Copilot+ PCs, AI is transforming dictation from a clunky accessibility bolt-on into something truly empowering.

Voice Control: a legacy Apple seems to have moved on from

For many users, Voice Control is invisible — few even know it exists. For those who do rely on it, its neglect is painfully obvious: limited editing, slow response times, unreliable commands, and no sign of meaningful improvement in years.

A computer-programmer friend of mine recently put it bluntly:

“Voice Control is legacy — necessary for a small cohort, but otherwise forgotten. It’s likely in maintenance mode, waiting to be replaced once Siri and Apple Intelligence mature.”

That prediction may prove true. But if Apple plans to fold Voice Control into a future AI assistant, it needs to ensure the replacement is far more capable than what we have today — and far more reliable than macOS 26’s broken implementation.

Aqua Voice shows what modern dictation can feel like

Enter Aqua Voice, a small but fast-moving startup that has rebuilt dictation from the ground up. Designed for people who live by their voice, Aqua uses AI to make speech feel fluid again — text appears almost instantly, editing feels natural, and the system recognises complex vocabulary that traditional dictation engines butcher.

It’s cross-app, too — working in Mail, Notes, Slack or Word with equal ease. And because it uses modern AI models, it understands context: commands like “delete that sentence” or “move this paragraph up” work far more reliably than Apple’s rigid syntax.

For disabled people, that difference is life-changing. It means independence, autonomy, and the ability to write and edit without fighting the technology.

Microsoft’s Fluid Dictation brings AI accessibility to Windows

Meanwhile, Microsoft is embedding similar intelligence deep into Windows 11. Its new Fluid Dictation mode — now available to Copilot+ PC users in beta channels — automatically adds punctuation, removes filler words, and refines grammar in real time. Crucially, it runs on-device, protecting privacy while delivering near-instant results. I haven’t yet had the chance to try Fluid Dictation myself, but for someone like me who relies heavily on voice input, its promise is compelling.

Combined with Copilot Voice, which lets users control their PC and compose documents conversationally, Microsoft is building something Apple once promised but never quite delivered: an accessible computing environment where speech is the default, not the afterthought.

AI is rescuing dictation — and redefining accessibility

The pattern is clear. AI is rescuing dictation from the margins and re-establishing it as a core input method. Where traditional systems demanded rigid commands, these new tools interpret context, intent, and nuance.

For accessibility, this isn’t a nice-to-have — it’s a revolution. When dictation becomes natural, accurate, and privacy-preserving, it stops feeling like assistive technology and starts feeling like mainstream technology done right.

A call to Apple: don’t abandon the people who rely on you

If Apple is indeed preparing to retire Voice Control in favour of a future “agentic” Siri, it must first ensure that Siri inherits all of Voice Control’s fine-grained navigation and editing powers — and then some. Accessibility shouldn’t regress in the name of progress.

Apple could learn much from Aqua Voice and Microsoft:

• Make dictation context-aware and conversational.
• Keep processing local to preserve privacy.
• Co-design with disabled people.
• Treat voice as a first-class way to interact with a Mac — not a niche add-on.

Apple is still widely regarded as a leader in accessibility, but the persistent problems with Voice Control suggest that leadership is slipping. Only by fixing them can Apple reclaim its position in voice accessibility.

Conclusion

The bugs I uncovered in macOS 26 were more than software errors; they were warning signs of a feature being quietly abandoned. While Apple’s Voice Control drifts, others are moving fast.

AI dictation is already here — faster, smarter, and more human than ever. The question is whether Apple will learn from that momentum or be left behind by it.

Colin Hughes is a former BBC producer who campaigns for greater access and affordability of technology for disabled people
