The battery, for real this time
Let me start with what works: battery life. The Ray-Ban Meta smart glasses Gen 2 claim up to 2× the battery life of Gen 1. In my use (a few photos, a video, message readouts, a couple of calls) I got through most of the day without worrying about the battery dying. For someone with a disability who depends on the glasses to stay connected, that's a game-changer: on Gen 1, if the battery died mid-day and I couldn't swap or charge the glasses myself, all communication would fall silent. Now, that risk is much lower.
Video capture has also been boosted: the Ray-Ban Meta smart glasses Gen 2 support 3K Ultra HD video capture (though photos remain largely unchanged). That's a meaningful step forward for content capture, whether for vlogging, POV clips, or sharing daily life hands-free.
The Ray-Ban Meta smart glasses Gen 2 start at £379 and are available now through the Meta Store and authorised retailers.
AI still feels a little raw
One place where expectations outrun reality is the built-in AI. It's responsive enough for basic tasks such as message readouts and voice commands, but it doesn't always feel "smart" in the way you'd hope. There are occasional delays, misrecognitions, and awkward phrasings. The AI is useful, but not seamless.
Meta hasn’t radically changed the AI backend here. The glasses don’t (yet) incorporate deep context awareness or multimodal memory in a way that consistently makes them feel anticipatory. That said, Meta has announced feature upgrades that could help close that gap.
Upcoming features to watch
Meta has revealed plans for two major software features in future updates: slo-mo mode and conversation focus.
- Slo-mo mode: This will allow the Ray-Ban Meta smart glasses Gen 2 to record video at slowed frame rates. For users who want to capture movements or moments with more artistic or documentary flair, slo-mo offers new creative flexibility. It won't change the core accessibility experience, but for someone capturing surroundings or an event, it enhances visual storytelling.
- Conversation focus: This is more directly relevant to accessibility. Conversation focus uses the open-ear speakers and microphone array to amplify the voice of whomever you’re speaking with, while suppressing ambient noise. In effect, it helps isolate the person you’re talking to in noisy environments like cafés or busy streets. For many users, this could act like a “situational hearing assist” — not a replacement for hearing aids, but a tool to reduce listening fatigue.
Neither feature has shipped yet, but both look promising on paper.
Feature limitations and current app support
To be clear, the core accessibility features remain the same as Gen 1. There is no set of brand-new accessibility tools launched in Gen 2. Instead, improvements come from extended uptime and the promise of future software enhancements.
If you’d like to understand why these features were such a breakthrough in the first place, see my Ray-Ban Meta Gen 1 review.
Currently, app support is still relatively limited. You can:
- Play music via Apple Music, Amazon Music, and Spotify
- Receive phone calls and SMS/text messages
- Manage Google Calendar appointments (in the US)
- Use WhatsApp, Facebook, Instagram, and Messenger for messages and video calls
- Stream video capture, upload to social, etc.
- Play audio books via Audible integration
But today there's no native, deep integration with many other apps (smart home control, advanced productivity tools, and so on). The ecosystem is limited by what Meta supports.
That said, there is reason for optimism. Meta has announced the Wearables Device Access Toolkit, an SDK intended to let third-party developers build integrations with the camera, microphone, and open-ear audio on the glasses.
- The toolkit will launch in preview later this year, with general availability in 2026.
- At least initially, the SDK won’t include direct Meta AI integration — developers will need to stream audio and video via their own models or services.
- The roadmap does not yet guarantee support for smart home control, but the opening of the toolkit means that possibility exists — if developers or communities build it.
So if you hope to see smart home control (e.g. controlling lights, thermostats, locks) run via the glasses, that’s not built in now — but the SDK is the first step toward enabling it.
My wishlist for future improvements
- A dedicated accessibility section in the Meta app for physical and motor disabilities: At the moment, accessibility is represented mainly by Be My Eyes for visually impaired users. Apple calls its equivalent section “Physical and Motor.” Meta could create a “Voice-First Access” category to highlight relevant features.
- Voice command to end calls: A simple "Hey Meta, hang up" would be invaluable when you're on a call. If you can't use your hands and a spam call comes in, you shouldn't be trapped with no way to end it.
- Faster reply flows in messaging: Remove the need to say “Reply.” After a message readout, allow an auto-listen mode for 2–3 seconds so users can simply dictate back. Offer this option as a toggle in settings.
- Optional read-back toggle: For short replies like “See you soon,” there’s no need for the glasses to read back the text and ask for confirmation. Let AI decide dynamically, or offer a toggle.
- Command to read recent messages: A simple “Hey Meta, read my recent messages” would reduce friction.
- Smart home integration: Partner with eWeLink or similar platforms so the glasses can trigger smart home scenes like opening doors. These already work with Siri Shortcuts, so why not Meta AI too?
- Hands-free live WhatsApp POV streaming: Enable live video broadcasting from the glasses without needing to open or touch the phone.
- Emoji support in dictation: Fun and expressive, this has long been a Siri feature. It should be here too.
- Geo-fenced message readouts: Allow automatic message readouts based on location, e.g. switched off at home but enabled when out and about.
- Fit adjustment options: People who can't use their arms due to disability have no plan B when glasses slip. Adjustable fits or accessories could solve this, but nothing in Gen 2 improves fit.
These aren’t niche extras — they would improve the experience for everyone, while making life vastly easier for disabled people.
The bottom line
The truth of this Ray-Ban Meta smart glasses Gen 2 review comes down to this:
- The battery life upgrade is the only truly dramatic change, and it matters deeply.
- AI feels competent but not magical.
- The accessibility base remains strong, but the real leap is in reliability and endurance.
- The announced features — slo-mo and conversation focus — could add meaningful value (especially conversation focus for communication) if they’re well executed.
- The SDK opening is a hopeful signal, and smart home control is not off the table — but not here yet.
- The wishlist above points to practical steps that would make these glasses more inclusive and truly voice-first.
If you're someone like me, for whom smart glasses are part of daily communication, Gen 2 is the version to opt for: the battery gives you peace of mind. But don't expect a flood of new capabilities; think of this as solid platform maintenance with a path toward richer future functions.
Ray-Ban Meta smart glasses Gen 2: Pros and cons
Pros
- Battery life roughly doubled compared to Gen 1
- New 3K Ultra HD video capture
- Classic Ray-Ban design with discreet smart features
- Reliable hands-free communication (calls, messages, WhatsApp, Messenger)
- Accessibility benefits from existing hands-free voice features carried over
Cons
- AI still feels limited and occasionally clunky
- Photos largely unchanged from Gen 1
- App integration is restricted (no smart home yet)
- Comfort and fit issues remain for long-term wear
- Future upgrades (slo-mo, conversation focus) not available at launch