OpenAI has launched an AI product that it claims is capable of logical reasoning and can therefore solve difficult problems in mathematics, programming and the natural sciences. This is an important step on the way to developing machines with human-like cognitive performance.
The AI models, known as o1, are being hailed as a sign of how far technological capabilities have evolved in recent years, as companies race to develop ever more sophisticated AI systems. In particular, there is a new race among technology giants such as Google DeepMind, OpenAI and Anthropic to develop software that can act independently as so-called agents: personalised bots designed to help people work, create or communicate better and to interact with the digital world.
OpenAI said the models will be integrated into ChatGPT Plus starting Thursday. They are intended for scientists and developers rather than general users. The company said the o1 models far outperformed existing models such as GPT-4o in a qualifying exam for the International Mathematical Olympiad, scoring 83 percent of the points, while the latter scored only 13 percent.
According to Mira Murati, the company's chief technology officer, the models also open up new ways to understand how AI works. "We get insight into the way the model thinks… we can observe its thought process step by step," she told the Financial Times.
The new models use a technique called reinforcement learning to solve problems. They take more time to analyse queries, making them more expensive than GPT models, but they are more consistent and sophisticated in their answers.
“During this time, it tries out different strategies to answer your questions,” said Mark Chen, the project's lead researcher. “If it finds that it has made mistakes, it can correct them.”
For applications such as online search, which OpenAI is experimenting with through its SearchGPT tool, this set of models could open up “a new search paradigm,” Murati says, enabling better research and information retrieval.
Teaching computer software to reason step by step and plan ahead is, according to experts in the field, an important milestone in the development of artificial general intelligence: that is, machines with human-like cognitive abilities.
If AI systems had real reasoning capabilities, it would enable “consistency of facts, arguments and conclusions drawn by AI (and) advances in AI agency and autonomy, probably the main obstacles to AGI,” says Yoshua Bengio, a computer scientist at the University of Montreal who has won the prestigious Turing Award.
There has been steady progress in this area, Bengio says, with models such as GPT, Google's Gemini and Anthropic's Claude showing early signs of reasoning capabilities. However, the scientific consensus is that AI systems do not have true, universal reasoning capabilities.
“The right way to assess progress is through independent assessments by scientists and academics without conflicts of interest,” he added.
Gary Marcus, professor of cognitive science at New York University, warned: “We have seen again and again claims about logical reasoning that have collapsed under careful, patient scrutiny by the scientific community, so I would approach any new claims with skepticism.”
Bengio also pointed out that software with more advanced capabilities carries a higher risk of abuse in the hands of malicious actors. OpenAI said it has “stepped up” its safety testing to reflect the advances, including giving independent UK and US AI safety institutes early access to a research version of the model.
According to technologists, advances in this area will drive AI progress in the coming years.
According to Aidan Gomez, CEO of AI startup Cohere and one of the Google researchers who helped develop the Transformer technology that underlies chatbots like ChatGPT, the performance of models improves “dramatically” when they are taught to solve problems.
Speaking at an FT event on Saturday, he said: “It's also significantly more expensive because you spend a lot of computing time planning, thinking and reasoning before you actually give an answer. So the models get more expensive at that scale, but they are significantly better at solving problems.”