Why I wore an AI-controlled Meta Ray-Ban camera on my face for a week

In this article, I share my experiences wearing AI-powered Meta Ray-Ban glasses for a week: how the smart glasses affected my daily interactions, and what they suggest about the broader role of wearable technology in our lives.

Mark Vletter
25 June 2025
12 min

A week with Meta and Ray-Ban’s ‘smart’ AI glasses

I feel like I’ve seen every review of the Meta and Ray-Ban AI glasses by now. So when the Ray-Ban representative hands me this piece of smart technology and, after some playing around, mentions that they are also selling them at SXSW today, I am very quickly convinced.

Did I swim in the sea with my previous sunglasses? Yes. Was that the reason to buy these new Ray-Ban sunglasses? No. Then why buy them? Because I am very curious about what smart glasses can do in an age of AI.

What’s actually inside these smart glasses?

So what are smart glasses, anyway? These glasses from Ray-Ban have a camera, a button, two speakers, touch controls along the temples and, most importantly, very thick temples that hold the batteries. There is also a sensor that detects whether you are wearing the glasses, two notification lights – one for you and one for the outside world – and several microphones. Add Wi-Fi 6, Bluetooth and a glasses case with a larger battery, and the glasses are complete.

Put all that together and you get a pair of glasses that lets you take photos and videos and has all the functionality of a Bluetooth headset, including the ability to wake Meta AI with the phrase “Hey Meta.”

The unboxing is easy, and with the accompanying app you transfer photos and videos from your glasses to your smartphone. In addition to providing protection, the glasses case also lets you recharge the glasses three times.

Are these smart glasses remarkable?

So far we’ve covered practicalities. On to the big question: are they actually remarkable? To answer that, I have to start with a few things that strike me. First, the glasses are heavy, both visually – you look like Prince Bernhard Jr. (a Dutch prince) – and on your nose. At 49 grams, they weigh about twice as much as an average pair of glasses.

The smart glasses cannot tell me that themselves, by the way. When I ask the glasses what I see in front of me, or how heavy a pair of glasses is on average, I get the message “the glasses cannot help with that”. That is strange, because version 15 of the glasses’ software is supposed to support this; it just doesn’t seem to work in the Netherlands yet. The fact that your glasses get software updates at all is, by the way, pretty wild.

Audio better than expected

Another thing to note is that the audio is much better than expected. I don’t expect to listen to a lot of music on these glasses, because as someone with ADHD I really can’t handle the sensory overload of ambient noise combined with music. Where the audio does work very well, however, is during phone calls, especially since the microphones are very good. Something else stands out here: I am more aware of my surroundings, I talk more softly, and, much more noticeably, a barrier is gone. I am still part of the normal world, much more so than when wearing Bluetooth earbuds.

Glasses simply integrate much better with the real world than earbuds or a smartphone held to your ear. Technology seems to recede into the background, something that becomes an even bigger theme as technology moves to the glasses form factor.

There’s a big downside to that, by the way: I no longer have to reach for my smartphone to take a photo or video either. A voice command or a press of the button is enough to take a picture or start recording. And that super tiny glowing light in the corner of the glasses? No one is going to notice it. That is a privacy problem as far as I’m concerned, and one that will have serious long-term implications.

Meta’s ecosystem as a prison

Diving deeper into the software, other things stand out. The only AI assistant I can opt for is Meta’s, so the command “OK Google” doesn’t work and you can forget about a conversation with ChatGPT. That immediately costs you a lot of potential functionality: starting a call or sending a message to my kids via Signal, for example, simply isn’t possible. You can only use the apps that Meta connects to, and those are, unsurprisingly, Meta’s own. Your phone, WhatsApp (not WhatsApp Business), Facebook Messenger, Facebook itself and Instagram are the only supported tools your glasses can control via voice. Beyond that, the device functions as a regular Bluetooth device, so making calls and listening to YouTube videos or Spotify also works.

This immediately exposes the weakness of Meta’s glasses. Ideally, glasses would integrate deeply with any app from any vendor. That would give you enormous freedom of choice and let you select the best app for each purpose; the glasses would then act purely as hardware, driven by software you choose yourself. But Meta is actually trying to become the new operating system with these glasses, just as Google and Apple always will. A dangerous development, as far as I’m concerned.

AI functions not yet available in the Netherlands

While Meta AI, as mentioned earlier, is not yet available in the Netherlands, with some “nerdery” you can still make use of the feature. For example, take a picture of a building and ask what you see, or ask ad hoc questions about, say, the euro–pound exchange rate. The latter I can also ask my Bluetooth headset, while for more information about a building I would otherwise have to open the Google Lens app on my smartphone. Again, you notice that the glasses push the technology further into the background.

But the number of times I actually use these features in daily life is extremely limited. That changes when you are traveling: usage increases rapidly, and with it the battery level of the glasses drops just as rapidly. With slightly more regular use, you might get half a day. The glasses charge to 50 percent in 22 minutes, which is not insurmountable, but the 75 minutes it takes to reach 100 percent is really too long. Especially if they are your primary prescription glasses or sunglasses and you have to do without them for an hour and a quarter.
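Those charging figures also imply a noticeable taper, by the way. A quick back-of-the-envelope calculation (using only the numbers quoted above) shows the second half of the charge going in at well under half the speed of the first:

```python
# Back-of-the-envelope check of the quoted charging figures:
# 0-50% takes 22 minutes, 50-100% takes the remaining 53 of 75 minutes.
first_half_rate = 50 / 22          # ~2.3 percentage points per minute
second_half_rate = 50 / (75 - 22)  # ~0.9 percentage points per minute

print(f"0-50%:   {first_half_rate:.2f} pp/min")
print(f"50-100%: {second_half_rate:.2f} pp/min")
```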

Photos and videos: nice concept, limited quality

I enjoy photography and I am one of those people who still lug around a system camera. With that in mind, it’s good to take a look at the photo and video capabilities of the glasses.

Two things stand out right away. The first is that the camera only shoots portrait photos, in 3:4 format. The second is the point of view (POV) that glasses give by default: an “I can look into your life” feeling. The camera is ultra-wide, you have to make do with 12 megapixels, and you can forget about RAW photos. The ultra-wide lens also produces the familiar distortion at the edges that you know from your smartphone.

The pictures the glasses take are quite usable in well-lit environments, but the camera becomes unusable in low light or in scenes with high contrast between light and dark. For example, I try to take pictures of a keynote session where the presentation slides are white. My smartphone gets more than enough light for good photos there, but the automatic exposure of the glasses destroys what we camera people call the highlights: they become white patches in which everything that was actually text disappears. That makes the glasses unusable in such situations.
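To illustrate what blown highlights do, here is a minimal, hypothetical sketch (not how the glasses’ firmware actually works): once the sensor’s 8-bit values saturate, the difference between slide and text is gone for good.

```python
import numpy as np

# Hypothetical scene: a bright white slide (230) with slightly darker text (200).
# 8-bit output clips at 255, so overexposure erases the difference between them.
scene = np.array([230, 200], dtype=float)   # [slide, text]

good = np.clip(scene * 1.0, 0, 255)    # [230. 200.] -> text still readable
blown = np.clip(scene * 1.5, 0, 255)   # [255. 255.] -> both clip to pure white

print(good, blown)
```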

Furthermore, you quickly notice that you have to account for the camera sitting on the left side of the glasses. When taking a mirror selfie, you have to look toward that side to avoid oddly off-center photos. And even when photographing something right in front of you, simply looking at the object is not enough for a well-framed picture. The Meta AI app’s smart crop feature does help you straighten the horizon a bit, though.

Video: that’s where the glasses shine

I think the AI Ray-Ban glasses shine when it comes to video. Being able to watch through someone else’s eyes is just a really nice concept. The video stabilization is strong, keeping the footage smooth even while walking or driving. That the footage is portrait-only is something I find unfortunate for both photo and video; I would have preferred a square sensor that lets you choose between a horizontal and a vertical crop.
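That square-sensor idea is simple to express in code, by the way. Here is a minimal sketch of what I mean (the dimensions and the Pillow-based approach are mine, purely for illustration): one square readout, cropped to either orientation after the fact.

```python
from PIL import Image

def crop_from_square(img: Image.Image, orientation: str) -> Image.Image:
    """Center-crop a square sensor image to a 3:4 portrait or 4:3 landscape frame."""
    side = min(img.size)
    if orientation == "portrait":
        w, h = side * 3 // 4, side
    else:
        w, h = side, side * 3 // 4
    left, top = (img.width - w) // 2, (img.height - h) // 2
    return img.crop((left, top, left + w, top + h))

square = Image.new("RGB", (3000, 3000))            # stand-in for a square readout
portrait = crop_from_square(square, "portrait")    # 2250 x 3000
landscape = crop_from_square(square, "landscape")  # 3000 x 2250
```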

The audio recordings from the glasses are good, especially if you want to record your own voice: ambient noise does not take over. And when it is quiet, ambient sound itself records very well too.

The real use cases: content creation and live streaming

The question is: what are the real use cases for these glasses? They lie mostly on the content creation and social media side. Platforms like Twitch, YouTube and Instagram favor live video over regular video in their algorithms. They do this because live content competes with streaming platforms and television, keeping eyes glued to their apps longer.

An unboxing, a video review or live coverage of an event thus becomes a very personal experience, because you literally experience it live through the eyes of the person broadcasting. Whether we should be happy about this, I’ll leave aside for now. Fortunately, you won’t be streaming for hours on end, because the battery of the glasses runs out quickly when you go live.

My experience after using it for a week

So what do I think of them? I used the glasses for a week: while traveling, during a few days of inspiration at the tech event SXSW in London, at a Dua Lipa concert, and simply at home and at work.

A couple of things stick with me the most. The first is that when using the AI Ray-Ban glasses, the technology really fades into the background. Having a conversation through the glasses is more pleasant and feels more natural than holding your phone or wearing a Bluetooth headset. You can also easily take a photo or video at a concert, capturing the atmosphere without staring at a screen the whole time. That gives a much better experience, not only for you but especially for those around you.

The second point is that you stay more connected to the outside world. Watching a video in a public setting with glasses is substantially different from doing so with sealed Bluetooth headphones, not least for your approachability and your contact with others.

The AI part, for now, remains a gimmick and doesn’t add much to everyday life for my liking. Besides, talking to glasses in public is just weird. The smartphone keyboard remains convenient precisely because it is a silent interface: it doesn’t disturb the people around you and keeps the conversation you are having with your phone private.

Finally, the look and feel is still not that of normal glasses. The weight is considerably higher, and you get the Prince Bernhard Jr. vibe as an undesirable extra.

A look – yeah, yeah – at the future: exciting and scary at the same time

What occupies me most is the future. I also tested Google Glass when it came out over a decade ago, and if you look at how much more advanced the technology is now, we are in for an extraordinary ten years. Especially if you have also kept an eye on the acquisitions Meta has made in recent years in the field of prism technology, which turns the lenses of your glasses into transparent displays.

However, I think wide adoption will mainly have to wait until smart glasses like the AI Ray-Ban are truly indistinguishable from normal glasses, and battery technology will have to take another step for that too. What the added value and killer feature of smart glasses will then be, I don’t know; I can tell you the current generation doesn’t have it yet. But if your glasses become a screen while you can remain present in the normal world, so that the technology integrates even further with the real one, we will be entering a very different world.

The simple things first: arrows in your glasses while walking or biking through town. The morning weather or news while you’re brushing your teeth. Interactive museums and historical stories on demand while you’re strolling through a city. Or repair instructions overlaid while you’re fixing your bike. When these things become hands-free and contextual, they all sound even more valuable.

But it is also a lot of information. I see a good chance that information overload will become a recognized disease, and that the digital detox movement you already see emerging – with old cameras and the resurgence of both the LP and the CD – will only grow.

The behavior that is going to change everything

I find the behavioral aspect especially interesting and exciting. If everyone can record or stream video and audio at any moment with the greatest of ease – with, say, the AI Ray-Ban – and thus has access to real-time information about everything, it touches the core of our humanity. Privacy will disappear, or at least need a new definition, and this development will call for new social codes. When I showed people less into technology how easily I could take a picture or video with the AI Ray-Ban, shock was the first reaction. And that is incredibly telling.

That will require agreements before you even start talking to each other, and I think recording-free zones will become very normal. At the other extreme, anonymity will eventually disappear, while there are plenty of social situations where being anonymous is simply very pleasant.

Are we ready for this?

Meta and Ray-Ban’s smart glasses are a fascinating device that shows where we are headed, but this is not yet the product that will change the world: it is too heavy, the battery life is too short and the AI features add too little. Still, like Google Glass once did, the glasses give a taste of something bigger.

More and more, technology is becoming not just something we use, but something we simply carry. No longer a device we pull out, but an extension of ourselves that is always present.

With that, the question is not “if” or “when,” the question is primarily: are we ready for this?
