Meta Ray-Bans Live AI hands-on: a solution looking for a problem

Here’s the scene: I’m wearing Meta’s latest Ray-Ban glasses with a new feature called Live AI, which can answer questions about the world around you. I’m preparing for a four-hour road trip to my in-laws for the Christmas holiday. I’m preplanning the next day’s breakfast because I’m 99.9 percent certain I will have no brain cells to concoct an edible one at 5AM. I don’t even know if I have anything to make a meal with. I open the fridge door and say, “Hey Meta, start Live AI.” Suddenly, John Cena’s voice is in my ear telling me that a Live AI session has begun.

“What breakfast can I make with the ingredients in my fridge?” I ask.

The inside is a sad sight with month-old Thanksgiving leftovers, a carton of eggs, soda, condiments, a tub of Greek yogurt, and a big jug of maple syrup. Meta-AI-as-John-Cena replies that I can make a “variety of breakfast dishes,” such as “scrambled eggs, omelets, or yogurt parfaits.” 

To be clear, there is not a single fresh fruit with which to make a parfait. The egg carton has two eggs in it. My spouse put an empty milk carton back in the fridge, meaning scrambled eggs and omelets are also out. My stomach rumbles, reminding me I skipped lunch. I bail on the breakfast idea and instead open the freezer door and ask what kind of dinner I can make with the ingredients inside. It’s mostly a bunch of frozen pizzas, an assortment of frozen veggies, and hamburger buns. I’m told, “frozen meals, stir-fries, and casseroles.” 

I decide to order in for dinner. It’ll be a drive-through breakfast on the road.

This is the issue with Live AI. More often than not, I don’t know when to use it. When I do, the answers I get are too obvious to be helpful. 

Photo by Amelia Holowaty Krales / The Verge

The pitch for Live AI is that it lets you speak to an AI assistant the way you would a friend. While it’s similar in function to the glasses’ multimodal AI feature, you don’t have to constantly prompt the AI. It (supposedly) knows when you’re talking to it. You can also string together multiple queries and follow-up questions. If you’re in a cooking class and something looks a bit off, you’d flag the instructor, and they’d look at the mess in your pan and tell you what you did wrong and how to fix it. This is meant to be a version of that, but with an incorporeal AI that lives in your glasses. It sees what you see and can help you out in real time.

It’s a cool concept. But I was stumped when it came time to use Live AI without guardrails. Whenever a question pops into my head, I automatically reach for my phone. That’s what I’ve been trained to do for over 10 years. The first and biggest hurdle to using Live AI was remembering it was an option. 

Damn, I would’ve never thought of reading a book on my TBR list.
Screenshot: Meta

The second problem was knowing when Live AI might be more useful than a quick Google search. Meta suggested I try scenarios involving fashion and cooking. I already told you how my cooking queries went. So, I asked the AI what color combinations I should try with a set of multicolored pastel press-on nails.

The AI suggested a “combination of pastel colors” would “complement the pink nails nicely.” I asked which of the books on my shelf I should read. The AI reminded me it “doesn’t have personal preferences or opinions” but that I should “read a book that interests [me] or one that [I’ve] been meaning to read for a while.” Dissatisfied, I asked which of the books was most highly acclaimed. It suggested I look that up online. I tried a few more scenarios and was left wondering: why would I ever talk to AI if all it does is restate the obvious and tell me to Google things myself? 

The most useful experience I had with Live AI was when I asked it how to zhuzh up my home office. At first, I got another milquetoast answer — add artwork and plants, and rearrange the furniture to create a cozier atmosphere. Annoyed, I asked it what type of artwork would look good. Again, it told me that “a variety of artwork” could look good “depending on [my] personal style.” Had I considered adding posters, prints, or paintings that reflected my interests or hobbies? I wanted to scream, but instead, I asked what style of poster would look good based on what was currently in the room. To that, I got my first somewhat useful answer: a colorful and playful poster with a fun design or cute character that would complement the stuffed animals in the room. I asked for artists to look into. It suggested Lisa Congdon, Camille Rose Garcia, and Jen Corace for their “playful and whimsical styles.”

Journey with me as I figure out how to ask Meta AI the right questions to get the answer I wanted.

And herein lies the biggest recurring problem I have with AI: you have to know how to ask the right questions to get the answer you want. 

I could’ve saved myself some grief if I’d just told Meta AI, “I want to hang artwork in my room. Based on what’s currently here, what artists should I look into?” This skill comes naturally to some folks. My spouse is a whiz at prompting AI. But for the rest of us, it’s a skill that has to be learned — and few people right now are teaching us AI noobs how to rewire our brains to best make use of this tech.

Photo by Amelia Holowaty Krales / The Verge

After Googling the artists Meta AI suggested, I was left back at square one. I liked their art, but none of them felt like my style. I relayed the experience to my best friend, who rolled her eyes and promptly sent me three artists on Instagram. I loved all of them. In a chiding voice, she said I should’ve just asked her and not bothered with a bot. Because, unlike Meta AI, she said, she actually knows me.

Live AI has other issues outside of the philosophical ones. It struggles to differentiate when you’re talking to it versus someone else in the room. At one point, it straight up lied and said it’d witnessed me feed my cat when I hadn’t. (It’d been confused by my spouse saying they’d fed the kitties.) It also only works in 30-minute windows before the battery runs out. That means you have to be intentional in how you use it — a hard thing to do when there are few obvious use cases.

I’m not against Live AI. The overarching vision is for all of us to be like Tony Stark, wearing cool glasses with their own little Jarvises in them. When you’re being hand-held through controlled demos, that future feels both inevitable and magical. It’s just that the fantasy starts to crack when you’re left to explore on your own. And once that happens, nine out of 10 times, you’re going to reach for your phone.
