If an article in The Information is to be believed, OpenAI's next big AI product announcement is imminent.
The Information reported on Tuesday that OpenAI plans to release Strawberry, an AI model that can effectively fact-check itself, within the next two weeks. Strawberry could be offered as a standalone product, but it could also be integrated into ChatGPT, OpenAI's AI-powered chatbot platform.
Strawberry is reportedly better at programming and math problems than other high-profile generative AI models (including OpenAI's own GPT-4o), and it avoids a number of the reasoning traps that typically trip up those models. But the improvements come at a price: Strawberry is said to be slow, quite slow. Sources tell The Information that the model takes 10 to 20 seconds to answer a single query.
Granted, OpenAI will likely position Strawberry as a model for mission-critical tasks where accuracy is paramount. That could resonate with enterprises, many of which are frustrated by the limitations of today's generative AI technology. In a poll this week, HR specialist Peninsula found that inaccuracy is a top concern for 41% of companies evaluating generative AI, and Gartner predicts that a third of all generative AI projects will be abandoned by the end of the year due to barriers to implementation.
While some companies may not mind the delay, I believe the average person will care.
Hallucinatory tendencies aside, today's models are fast, incredibly fast. We have become accustomed to that speed; it actually makes interactions feel more natural. If Strawberry's “processing time” is indeed an order of magnitude longer than that of existing models, it will be difficult to avoid the impression that Strawberry is a step backwards in some respects.
That assumes the best-case scenario: that Strawberry consistently answers questions correctly. If it is still error-prone, as the reporting suggests, the long wait times will be even harder to bear.
OpenAI is undoubtedly under pressure to deliver results as it pours billions into AI training and staffing. Its investors and potential new backers are hoping to see a return sooner rather than later, one imagines. But releasing an immature model like Strawberry, and possibly charging far more for it, doesn't seem advisable.
I believe it would be wiser to let the technology mature a bit. Then again, as the generative AI race heats up, OpenAI may not be able to afford that luxury.
News
Apple introduces visual search: Camera Control, the new button on the iPhone 16 and 16 Plus, can launch what Apple calls “visual intelligence”: essentially a reverse image search combined with some text recognition. The company is working with third parties, including Google, to provide search results.
Apple gives up on AI: Devin writes that many of Apple's generative AI features are actually quite simple, contrary to what the company's bombastic marketing would have you believe.
Audible trains AI for audiobooks: Audible, Amazon's audiobook business, said it will use AI trained on the voices of professional narrators to create new audiobook recordings. Narrators will be compensated on a title-by-title basis for any audiobooks created using their AI voices, with royalties shared.
Musk denies Tesla-xAI deal: Elon Musk pushed back against a Wall Street Journal report that one of his companies, Tesla, has discussed sharing revenue with another of his companies, xAI, in order to use the latter's generative AI models.
Bing gets deepfake removal tools: Microsoft says it's partnering with StopNCII, an organization that lets victims of revenge porn create a digital fingerprint of explicit images, real or not, to help remove nonconsensual pornography from Bing search results.
Google’s “Ask Photos” launches: Google's AI-powered search feature Ask Photos rolled out late last week to select Google Photos users in the US. Ask Photos lets you ask complex queries like “Show me the best photo from each of the national parks I've visited,” “What did we order at this restaurant last time?” and “Where did we camp last August?”
US, UK and EU sign AI treaty: At a summit last week, the US, UK and EU signed a treaty on AI safety drafted by the Council of Europe (COE), an international standards and human rights organization. The COE describes the treaty as “the first international legally binding treaty designed to ensure that the use of AI systems is fully compatible with human rights, democracy and the rule of law.”
Research paper of the week
Every biological process relies on protein-protein interactions, which occur when proteins bind to one another. “Binding proteins,” proteins that bind to specific target molecules, have applications in drug development, disease diagnosis, and more.
But producing binding proteins is often a laborious and costly undertaking, and one that carries a risk of failure.
In search of an AI-powered solution, Google's AI lab DeepMind developed AlphaProteo, a model that predicts which proteins will bind to target molecules. Given a few parameters, AlphaProteo can output a candidate protein that binds to a molecule at a specified binding site.
In tests with seven target molecules, AlphaProteo produced protein binders with 3 to 300 times higher “binding affinity” (the strength with which a binder attaches to its target) than previous binder discovery methods. In addition, AlphaProteo was the first model to successfully generate a binder for VEGF-A, a protein linked to cancer and complications from diabetes.
However, DeepMind admits that AlphaProteo failed on an eighth test target, and that strong binding is usually only the first step toward creating proteins that might be useful for practical applications.
Model of the week
There is a brand-new, highly capable generative AI model out there, and anyone can download, fine-tune and run it.
The Allen Institute for AI (AI2), together with the startup Contextual AI, developed OLMoE, a text-generating English-language model with a 7-billion-parameter mixture-of-experts (MoE) architecture. (“Parameters” roughly correspond to a model's problem-solving capabilities, and models with more parameters generally, but not always, perform better than those with fewer.)
MoEs break data processing tasks down into subtasks and then delegate them to smaller, specialized “expert” models. They aren't new. But what makes OLMoE notable, aside from the fact that it's openly licensed, is that it outperforms many models in its class across a range of applications and benchmarks, including Meta's Llama 2, Google's Gemma 2, and Mistral's Mistral 7B.
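To make the routing idea concrete, here is a minimal, illustrative sketch in Python using PyTorch. It is not OLMoE's actual code; the class name, layer sizes, and top-2 routing choice are assumptions for illustration only. It shows how a router scores a set of experts for each token and sends the token through only the top-scoring few.

```python
# Minimal sketch of a sparse mixture-of-experts (MoE) layer.
# Hypothetical example for illustration; not taken from OLMoE.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    def __init__(self, d_model=512, n_experts=8, top_k=2):
        super().__init__()
        # Each "expert" is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )
        # The router assigns every token a score for every expert.
        self.router = nn.Linear(d_model, n_experts)
        self.top_k = top_k

    def forward(self, x):
        # x: (n_tokens, d_model)
        scores = self.router(x)                            # (n_tokens, n_experts)
        weights, chosen = scores.topk(self.top_k, dim=-1)  # keep only the top-k experts per token
        weights = F.softmax(weights, dim=-1)               # normalize the routing weights
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e                # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Usage: route 16 token embeddings through the layer.
layer = SparseMoELayer()
tokens = torch.randn(16, 512)
print(layer(tokens).shape)  # torch.Size([16, 512])
```

The point of the design is efficiency: although the layer holds many experts' worth of parameters, each token only activates a couple of them, so compute per token stays close to that of a much smaller dense model.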
Several variants of OLMoE, along with the data and code used to create them, are available on GitHub.
Grab bag
This week was Apple week. The company held an event on Monday where it announced new iPhones, Apple Watch models, and AirPods. Here's a recap in case you missed it.
Apple Intelligence, Apple's suite of AI-powered services, got its expected time in the spotlight. Apple reiterated that ChatGPT will be integrated into the experience in several key ways. But oddly, there was no mention of AI partnerships beyond the previously announced OpenAI deal, even though Apple had quietly hinted at such partnerships earlier this summer.
In June, at WWDC 2024, SVP Craig Federighi confirmed Apple's plans to work with additional third-party models in the future, including Google's Gemini. “There's nothing to announce at the moment,” he said, “but that's our general direction.”
Since then, there has been radio silence.
Maybe it's taking longer than expected to finalize the necessary paperwork, or maybe there's been a technical setback. Or perhaps Apple's potential investment in OpenAI is making prospective model partners uneasy.
In any case, it seems that ChatGPT will be the only third-party model in Apple Intelligence for the foreseeable future. Sorry, Gemini fans.