Meta chief executive Mark Zuckerberg unveiled a prototype of lightweight augmented reality glasses called Orion on Wednesday, as Big Tech's race to build the next computing platform intensifies.
During a presentation at Meta's annual Connect conference, Zuckerberg said the glasses were the “most advanced” in the world and marked the “culmination of decades of breakthrough inventions” and one of the greatest challenges the tech industry has ever faced.
With holographic displays, the glasses can overlay 2D and 3D content on the real world and use artificial intelligence to analyse the wearer's surroundings and “proactively” offer on-screen suggestions, Meta said. Its shares rose more than 2 percent after the announcement.
The prototype is only available for internal company use and for some developers to build with. For the glasses to be ready for consumers, Zuckerberg acknowledged they would need to be “smaller” and “more fashionable”, with a sharper display and a lower price, adding: “we're keeping track of all of those things.”
In a Meta video showcasing the Orion technology, several Silicon Valley stars praised it, including Nvidia founder and chief executive Jensen Huang and Reddit chief executive Steve Huffman.
Meta bought virtual reality headset maker Oculus in 2014 and continues to develop fully immersive VR headsets under the Quest brand. The company announced its intention to build AR glasses five years ago.
The race to develop smart AR glasses is intensifying. Evan Spiegel's Snap, a smaller competitor to Meta, unveiled the latest version of its AR glasses last week. Meta and Snap hope to bypass Apple's operating system by betting that immersive glasses will one day replace smartphones. Tech companies have been attempting to develop AR wearables for years, such as the now-discontinued Google Glass, but they have failed to wow consumers.
Zuckerberg praised Orion's AI capabilities, saying the rapid development of large language models, including Meta's Llama, has led to “a brand new AI-centric category of devices.” The chief executive said that in addition to integrating Meta AI and hand tracking, the Orion glasses would also use a wristband that receives signals from the body, including the brain, acting as a neural interface.
“Voice is great, but sometimes the thing is public and you don't want to say what you're trying to do . . . I think you want a device that allows you to just send a signal from your brain to the device,” Zuckerberg said.
Meta has previously invested significant resources in electromyography (EMG), a technique that uses sensors to convert the motor nerve signals travelling from the wrist to the hand into digital commands that can control a device. In recent research, EMG has been used to translate how the brain sends signals to the hands to perform actions such as tapping and swiping.
In 2019, Meta acquired CTRL-labs, a US start-up developing technology that allows people to control electronic devices with their brains, for about $1 billion.
The company also released new product updates on Wednesday built on its advances in generative AI, as tech companies race to adopt the rapidly evolving technology. These included the latest version of its large language model, Llama 3.2, its “first large vision model”, which can understand charts, graphs and documents.
“Now that Llama is at the frontier in terms of capabilities, I think we've reached a tipping point in the industry where it's starting to become something like an industry standard, or something like the Linux of AI,” Zuckerberg said of the new model.
He said the Meta AI chatbot, which uses Llama, is “on track to become the most used AI assistant”, with 500 million monthly active users.
Meta also announced an updated version of its Ray-Ban smart glasses, which lack AR displays but now allow users to analyse photos or videos in real time through a voice interface powered by Meta AI. Other features included setting reminders, such as noting a parking spot; calling phone numbers from documents; scanning QR codes; and translating live conversations into different languages.