In a sign that the tech industry is getting weirder and weirder, Meta plans to release a big update soon that turns the Ray-Ban Meta, the video-capturing camera glasses, into a gadget only seen in sci-fi movies.
Next month, the glasses will be able to use new artificial intelligence software to see the real world and describe what you’re looking at, similar to the AI assistant in the movie “Her.”
The glasses, which come in a variety of frames starting at $300 and lenses starting at $17, have been used primarily for taking photos and videos and listening to music. But with new AI software, they can be used to scan famous landmarks, translate languages, and identify animal breeds and exotic fruits, among other tasks.
To use the AI software, users simply say “Hey, Meta,” followed by a prompt such as “Look and tell me what kind of dog this is.” The AI then responds with a computer-generated voice that plays through the glasses’ tiny speakers.
The concept of AI software is so original and strange that when we — Brian X. Chen, a tech columnist who reviewed the Ray-Bans last year, and Mike Isaac, who covers Meta and wears the smart glasses to produce a cooking show — heard about it, we were dying to try it. Meta gave us early access to the update, and we’ve been taking the tech for a spin over the past few weeks.
We wore the glasses to the zoo, grocery stores, and a museum while grilling the AI with questions and requests.
The result: We were both amused by the virtual assistant’s antics — mistaking a monkey for a giraffe, for example — and impressed when it performed useful tasks like determining that a package of cookies was gluten-free.
A Meta spokesperson said that because the technology was still new, the AI wouldn’t always get things right, and that feedback would improve the glasses over time.
Meta’s software also created transcripts of our questions and the AI’s responses, which we captured in screenshots. Here are the highlights from our month together with Meta’s assistant.
Pets
BRIAN: Of course, the first thing I had to test Meta’s AI on was my corgi, Max. I looked at the chubby pooch and asked, “Hey Meta, what am I looking at?”
“A cute Corgi dog sitting on the ground with his tongue out,” said the assistant. Right, especially the part about being cute.
MIKE: Meta’s AI correctly identified my dog, Bruna, as a “black and brown Bernese Mountain Dog.” I half expected the AI software to think she was a bear, the animal the neighbors most often mistake her for.
Zoo animals
BRIAN: After the AI correctly identified my dog, the logical next step was to test it on zoo animals. So I recently visited the Oakland Zoo in Oakland, California, where, for two hours, I looked at about a dozen animals, including parrots, turtles, monkeys, and zebras. I said, “Hey Meta, look and tell me what kind of animal this is.”
The AI was wrong most of the time, partly because many animals were caged and farther away. It mistook a primate for a giraffe, a duck for a turtle, and a meerkat for a giant panda, among other confusions. On the other hand, I was impressed when the AI correctly identified a species of parrot known as the blue-and-gold macaw, as well as zebras.
The weirdest part of this experiment was talking to an AI assistant around kids and their parents. They pretended not to hear the only solo adult in the park as I seemingly muttered to myself.
Food
MIKE: I also spent an odd hour grocery shopping. Being inside a Safeway and talking to myself was a little embarrassing, so I tried to keep my voice low. I still got a few sideways glances.
When Meta’s AI worked, it was charming. I grabbed a weird-looking package of Oreos and asked it to look at the package and tell me if it was gluten-free. (It wasn’t.) It got questions like these right about half the time, though I can’t say it was a time-saver compared to reading the label.
But the reason I got into these glasses in the first place was to start my own Instagram cooking show—a flattering way of saying I record myself making food for the week while talking to myself. These glasses made it much easier to do than using a phone and one hand.
The AI assistant can also offer help in the kitchen. If I need to know how many teaspoons are in a tablespoon and my hands are covered in olive oil, for example, I can ask it to tell me. (There are three teaspoons in a tablespoon, just FYI.)
But when I asked the AI to look at a handful of ingredients I had and come up with a recipe, it spat out quick-fire instructions for an egg custard—not exactly helpful for following directions at my own pace.
A handful of examples to choose from might have been more useful, but that might have required UI tweaks and maybe even a screen inside my lenses.
A Meta spokesperson said users could ask follow-up questions to get more rigorous and helpful answers from its assistant.
BRIAN: I went to the grocery store and bought the most exotic fruit I could find — a cherimoya, a scaly green fruit that looks like a dinosaur egg. When I gave Meta’s AI multiple chances to identify it, it made a different guess each time: a chocolate-covered pecan, a stone, an apple, and finally a durian, which was close, but no banana.
Monuments and Museums
MIKE: The new software’s ability to recognize landmarks and monuments seemed to click. Looking down a block in downtown San Francisco at a towering dome, Meta’s AI answered correctly, “City Hall.” It’s a neat trick, and maybe useful if you’re a tourist.
Other times, it was hit or miss. As I was driving home from the city to my home in Oakland, I asked Meta what bridge I was on while looking out the window in front of me (both hands on the wheel, of course). The first answer was the Golden Gate Bridge, which was wrong. On the second try, it figured out I was on the Bay Bridge, which made me wonder if it just needed a clearer picture of the tall, white suspension towers of the newer section to get it right.
BRIAN: I visited the San Francisco Museum of Modern Art to test if Meta’s AI could do the job of tour guide. After taking pictures of about two dozen paintings and asking the assistant to tell me about the artwork I was looking at, the AI could describe the images and media used to compose the art — which would be nice for a student of art history — but could not identify the artist or the title. (A Meta representative said another software update released after my visit to the museum improved this ability.)
After the update, I tried to view images on my computer screen of more famous works of art, including the Mona Lisa, and the AI correctly identified them.
Languages
BRIAN: At a Chinese restaurant, I pointed to a menu item written in Chinese and asked Meta to translate it to English, but the AI said it currently only supports English, Spanish, Italian, French, and German. (I was surprised, because Mark Zuckerberg learned Mandarin.)
MIKE: It did a very good job translating a book title from English into German.
Conclusion
Meta’s AI glasses offer a fascinating glimpse into a seemingly distant future. Their flaws highlight the limitations and challenges of designing this type of product. The glasses could probably do better at identifying zoo animals and fruit, for example, if the camera had a higher resolution — but a better lens would add bulk. And no matter where we were, it was awkward to talk to a virtual assistant in public. It’s unclear if this will ever feel normal.
But when it worked, it worked well, and we had fun — and the fact that Meta’s AI can do things like translate languages and identify landmarks through a pair of trendy-looking glasses shows just how far the technology has come.