Apple appears to be moving closer to its first smart glasses. Recent reporting suggests the company is working on an AI-led wearable: Bloomberg has reported on several frame styles and camera designs, and Reuters previously reported that Apple planned smart glasses as part of a broader push into AI devices.
I wear Ray-Ban Meta smart glasses every day, and that display-free product has already proved how useful this category can be. They are not a novelty for me, or a lifestyle gadget I occasionally test for a review. They have become part of how I access technology.
I have written before about how Ray-Ban Meta smart glasses can be hands-free heaven for accessibility, and that experience remains the starting point for how I judge Apple’s rumoured glasses.
Ray-Ban Meta glasses are not perfect. They still have gaps, limits and frustrations. But their deeply ingrained hands-free voice control has transformed parts of my daily life. That is the lesson Apple should take most seriously.
The real promise of smart glasses is not simply putting a display in front of the eye. It is placing a useful computer on the face, available by voice, without needing to reach for a phone, hold a device, tap a screen or point a camera by hand.
For many people, that is convenience. For some disabled people, it can be the difference between needing help and acting independently.
Ray-Ban Meta has already proved the point
Ray-Ban Meta glasses have shown that display-free smart glasses can be useful now.
They can make calls, send texts, control features and answer questions with a few words, without needing to unlock a phone or press and hold for assistance. The product is also marketed around hands-free communication, including calls and messages through voice commands.
That sounds like a standard product feature until you live with it.
The glasses are already on my face. The speakers are near my ears. The microphones are listening for a wake phrase. When the system works, the interaction starts from where I am, not from a phone lying somewhere out of reach.
That has changed my expectations.
A phone still assumes a lot. It assumes you can pick it up, hold it, point it, unlock it, frame the subject, tap the right area of the screen and correct mistakes. For me, those physical steps can be the barrier.
Ray-Ban Meta glasses remove some of that friction. I can ask a question, hear a message, dictate a reply, make a call or capture something without first negotiating the phone. That is why I see them less as a camera gadget and more as an early form of ambient, hands-free computing.
This is where the mainstream and accessibility stories meet. The same feature that makes smart glasses convenient for one person can make them liberating for another.
The gaps Apple needs to close
Meta deserves real credit for making smart glasses feel normal. The partnership with Ray-Ban gives the product familiar eyewear credibility, and the voice-first experience is often impressively natural.
But the gaps are also clear.
Some features are region-limited. Some arrive in the US well before the UK. The assistant can be inconsistent. The smart home story is non-existent. There are still moments when the glasses push you back towards the phone, the app or a physical gesture.
For many users, that may be a minor inconvenience. For someone with limited hand movement, it can be the point at which the product stops being accessible.
This is the area where Apple has to go beyond Meta. Not by adding more gimmicks, but by removing more physical assumptions.
If Apple smart glasses arrive, the basics should be fully voice-first. Calls, messages, volume, notifications, media, camera capture, settings, error recovery and reconnection should not depend on touching the frame or opening the iPhone.
There should always be a non-touch route back.
That may sound demanding, but it is the only way to make the product genuinely dependable for people who cannot simply tap, swipe, reset, unfold, refold or troubleshoot with their hands.
Fit also matters more than the technology industry sometimes admits. Smart glasses are not like a phone that can be placed on a desk or a watch that can be tightened once and forgotten. They sit on the face all day. If they slip down the nose, press uncomfortably, sit badly with prescription lenses or need constant adjustment, they quickly become inaccessible for people who cannot easily reposition them. Apple will need to treat frame fit, weight, nose pads, prescription support and adjustability as accessibility issues, not cosmetic details.
Why iOS 27 Siri could change the picture
Apple’s rumoured smart glasses will only be as capable as the assistant behind them. That is why the expected Siri upgrade in iOS 27 matters.
The foundations for a more capable Siri are now more than rumour. Apple and Google have entered a multi-year collaboration under which the next generation of Apple Foundation Models will be based on Google’s Gemini models and cloud technology, helping to power future Apple Intelligence features, including a more personalised Siri.
Reuters has also reported that Apple is testing a Siri feature for iOS 27 that would allow the assistant to handle multiple commands in a single query. If Apple brings that level of conversational and multi-step intelligence to smart glasses, the product becomes much more interesting.
That kind of upgrade could be significant on an iPhone. On glasses, it could be transformative.
The current Siri is too limited to carry a serious wearable AI product. It can be useful, but it is not yet the conversational, context-aware, action-taking assistant that smart glasses need. A wearable device needs an assistant that can understand natural speech, remember context, control apps, complete multi-step tasks and recover gracefully when something goes wrong.
Imagine saying through Apple glasses:
“Message Adam that I’m running ten minutes late, remind me to call him when I get home, and turn on the hall light.”
That should not require three separate commands, a screen, or manual correction. A more capable Siri should understand the intent, take the actions and confirm what it has done.
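On the developer side, the building blocks for that kind of chaining already exist. The sketch below uses Apple's App Intents framework, which is how apps expose individual actions to Siri and Shortcuts today; the intent itself, its name and its parameters are hypothetical, and stitching several such actions together from one spoken sentence is exactly the part a smarter Siri would have to add.

```swift
import AppIntents

// Hypothetical intent (the name and parameters are illustrative, not Apple's).
// App Intents is how an app exposes a single action to Siri and Shortcuts today;
// a more capable Siri would need to chain several such actions from one utterance.
struct SendRunningLateMessage: AppIntent {
    static var title: LocalizedStringResource = "Send Running Late Message"

    @Parameter(title: "Contact")
    var contact: String

    @Parameter(title: "Minutes late")
    var minutes: Int

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would hand this off to its messaging layer here.
        let text = "I'm running \(minutes) minutes late."
        return .result(dialog: "Telling \(contact): \(text)")
    }
}
```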
To borrow from the old political phrase, it is the ecosystem, not the glasses alone, that could make Apple’s version matter. This is where Apple has an advantage if it executes well. Siri could connect the glasses to the iPhone, Apple Watch, HomePod, Mac, Shortcuts, Apple Maps, Find My, Calendar, Messages and the Home app. Apple glasses would not need to replace those devices. They could become the voice-first access layer across them.
That is a much more interesting product than camera glasses alone.
Wearable Visual Intelligence matters because phones still require hands
Apple’s Visual Intelligence already points towards the same idea. Apple describes Visual Intelligence as a way to learn more about surroundings and on-screen content, including identifying places, interacting with text, creating calendar events from posters, identifying plants and animals, asking ChatGPT and searching Google or supported apps.
But on the iPhone, it still has a physical access problem.
Apple’s own iPhone guide says Visual Intelligence can be used by clicking and holding the Camera Control or Action button to search visually for objects around you. It can also be accessed through Control Centre or the Lock Screen.
That still assumes you can pick up the phone, point it at something and operate the device.
I cannot reliably do that.
So while Visual Intelligence may be useful in theory, the interaction model can make it inaccessible in practice. This is exactly where smart glasses could change the equation.
If the camera is already on my face, that barrier is much lower. I could ask, “what is this?”, “read that label”, “what does that sign say?”, or “is there a step ahead?” without needing to handle the phone first.
That matters because it turns Visual Intelligence from a phone feature into a wearable support tool.
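To separate speculation from what already ships: the recognition side of this exists on Apple platforms today. The sketch below uses the Vision framework's on-device text recognition, which is a real API; everything about glasses capturing and handing over the camera frame is my assumption.

```swift
import CoreGraphics
import Vision

// A minimal sketch of the "read that label" step, assuming a camera frame is
// already available as a CGImage. Vision's text recognition is a shipping API;
// how glasses would capture and supply the frame is entirely assumed here.
func readText(in frame: CGImage, completion: @escaping ([String]) -> Void) {
    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        // Keep the best candidate for each detected line of text.
        completion(observations.compactMap { $0.topCandidates(1).first?.string })
    }
    request.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(cgImage: frame, options: [:])
    try? handler.perform([request])
}
```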
Apple would need to be careful here. Real-time environmental guidance has safety implications, particularly for wheelchair users and visually impaired people. The system should not overclaim. But a cautious, well-designed assistant that can identify text, objects, entrances, obstacles and context could be genuinely useful.
The key point is simple: intelligence is only accessible if the method of using it is accessible too.

Apple glasses should become a voice-first control layer
The most important missing piece in Ray-Ban Meta, for me, is broader command of the devices and services around me.
Smart glasses should not only answer questions or take photos. They should help control the world around the wearer.
This is where Apple has a clear opportunity. The Apple Home app and Siri already support control of smart home accessories such as lights, thermostats and other connected devices. Smart glasses could make that control feel more natural.
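Under the hood, that control runs through HomeKit, and the rough shape of it is simple. The sketch below is a minimal, assumption-laden example: it looks up an accessory by its exact name in the first home and writes its power-state characteristic, which is more or less what every "turn off the lamp" request resolves to today.

```swift
import HomeKit

// A minimal sketch using the HomeKit framework: look up an accessory by its
// exact name and switch its power state off. The function and accessory names
// are illustrative; error handling and home/room selection are simplified.
func turnOff(accessoryNamed name: String, using manager: HMHomeManager) {
    guard let home = manager.homes.first,
          let accessory = home.accessories.first(where: { $0.name == name }) else { return }

    for service in accessory.services {
        for characteristic in service.characteristics
        where characteristic.characteristicType == HMCharacteristicTypePowerState {
            characteristic.writeValue(false) { error in
                if let error { print("Could not switch off \(name): \(error)") }
            }
        }
    }
}
```

Notice that everything in that sketch hinges on knowing the accessory's exact name, which is precisely the rigidity described below.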
Today, smart home voice control can still be rigid. You often need to remember exact device names or room names. That is workable, but not ideal.
Glasses could add context.
If I am looking towards a lamp, I should be able to say, “turn that off.” If I am near the door, I should be able to ask, “is it locked?” If I am in a room with several devices, Siri should understand what “this room” or “that one” means.
For disabled people, this is not just a futuristic convenience. It can be part of independent living. Lights, locks, blinds, heating, plugs and intercoms are not minor comforts when physical access is limited.
This is why Apple’s ecosystem matters. Meta has shown the value of hands-free glasses. Apple has the chance to connect that idea to the home, the phone, the watch, the Mac and the wider operating system.
Privacy must be built into the product
Smart glasses also raise obvious privacy concerns.
A camera on the face is different from a camera in the pocket. People nearby need to know when recording is happening. Wearers need to understand what is being processed, stored or sent to the cloud.
This will be especially important in care settings. If smart glasses are used by disabled people, care workers or others during personal routines, medical appointments or private conversations, privacy and consent must be treated as design issues from the start.
Apple will probably lean on its privacy reputation, and that may help. But trust cannot be assumed. It has to be designed into the product through clear indicators, understandable settings and strong limits on data use.
A wearable AI device that sees and hears the world must be useful, but it must also be accountable.
The real test for Apple smart glasses
Apple does not need its first smart glasses to be spectacular. It does not need to launch with a full augmented reality display. It does not need to replace the iPhone.
It needs to make the right first product.
Ray-Ban Meta has already proved that smart glasses can be normal, useful and sometimes life-changing. In my own life, they have shown how powerful deeply embedded hands-free voice control can be.
Apple’s task is to build on that foundation.
That means a reliable Siri. Strong prescription support. Proper hands-free operation. Better smart home control. Wearable Visual Intelligence. Ecosystem-wide actions. Clear privacy. And accessibility built in from the beginning, not added later as a correction.
The question is not whether Apple can make attractive smart glasses. It almost certainly can.
The real question is whether Apple understands what makes this category important.
For some people, smart glasses will be another AI gadget. For others, they could become a practical, voice-first route through daily life.
If Apple gets that right, its first smart glasses may not need a display to be important. They may matter because they finally make the computer easier to reach without reaching at all.