
I Bought the Meta Glasses. Here's My Honest Take.

I own a pair of Meta Ray-Ban AI glasses and reach for them less than I expected. Here's what actually works, what doesn't, and what I'd need to change my mind.

Article Details — Transparency Protocol v3.0
William (60%): personal context and experience, editing, and final polish.
AI (40%): structural formatting, initial prose drafting, synthesis from source material.
Stack: Claude Sonnet 4.6

Meta announced prescription versions of its Ray-Ban AI glasses this week, and people are talking about them. So I thought it would be a good time to share some thoughts on AI glasses more generally, and on the Meta Ray-Ban v2 glasses specifically.

The concept is genuinely compelling.

I was sucked in by the marketing on these and have been using them for over a month, so I now feel qualified to talk about the promise of AI glasses versus the actual payoff.

Why I bought the Meta Ray-Ban AI Glasses

I got them thinking I'd use them constantly for documenting my work.

I'm a creator who makes a lot of videos and I'm always moving around, so hands-free capture that lets me stay present in what I'm doing seemed like an obvious fit. I was genuinely hyped. The pitch I sold myself: capture ideas as they happen, with no more interrupting the moment to reach for my phone (or drain its battery).

What actually happened: they sit in their case more than I'd like to admit.

Here's the specific thing that killed it for me.

I have the non-display version — which means when I'm shooting, I have no idea what the camera sees. I'm pointing my head at something and hoping the frame is right. For anyone doing intentional visual work, that's a real problem. You can't compose a shot if you can't see your shot.

The display versions solve this, but they're more expensive and have their own tradeoffs around weight and battery. It's not a free fix.

[Video, 0:21: First video taken with the Meta Ray-Ban AI Smart Glasses]

What would make these glasses more useful

Three things I'd actually need before these become a daily driver:

The ability to choose which AI I'm talking to. Right now it's Meta AI, full stop. I want to be able to choose Claude or Gemini depending on what I'm doing. I find it rich that Meta complains about Apple's walled gardens and then builds its own. Sure, you can use whatever model you want if you treat the glasses as a plain Bluetooth device (see the sketch after this list), but that feels clunky in practice.

A viewfinder. I know adding a display adds weight and shortens battery life and raises the price. But without any visual feedback, the use cases for serious creative work are narrow.

And finally, the ability to shoot videos and images in landscape. Right now it's portrait only.
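
For the curious, here's roughly what that Bluetooth workaround looks like. This is a minimal sketch of my own, not anything Meta supports: it assumes the glasses are paired as your device's default microphone, and it leans on the SpeechRecognition and anthropic Python packages (you'd need both installed, plus an API key) to route what you say to Claude instead of Meta AI. The model name is a placeholder; swap in whatever you actually use.

```python
# Sketch: treat the glasses as a plain Bluetooth mic and talk to Claude.
# Assumes the glasses are the system's default input device, and that the
# SpeechRecognition and anthropic packages are installed with
# ANTHROPIC_API_KEY set in the environment.
import speech_recognition as sr
import anthropic

recognizer = sr.Recognizer()
client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with sr.Microphone() as source:  # the glasses, if they're the default input
    print("Listening...")
    audio = recognizer.listen(source)

# Transcribe the captured audio (Google's free web API; swap in Whisper, etc.)
question = recognizer.recognize_google(audio)
print(f"You asked: {question}")

# Send the transcript to the model of your choice rather than Meta AI
response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder; use whatever model you prefer
    max_tokens=300,
    messages=[{"role": "user", "content": question}],
)
print(response.content[0].text)
```

It works, but you can feel why I called it clunky: you're bouncing audio through your phone or laptop, and you lose the glasses' built-in camera and assistant integration entirely.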

The prescription angle in this week's announcement is genuinely smart — removing the "I already wear glasses and I'm not wearing two pairs" objection is meaningful. A lot of people had that objection, including people who would otherwise be interested. But prescription or not, you're getting the same hardware underneath.

I still think wearable AI devices are the future.

The ambient, hands-free part of it actually works when the context is right. Walking around, doing something physical, having questions you want answered without stopping what you're doing — that's a legitimate use case. I just haven't found it consistently in my own work yet.

They're expensive for what they currently do. And they need more work before I'd recommend them to a video creator without some serious caveats.

Maybe the display version changes the equation. I'll keep an eye on it.
