
Will people really pay $200 a month for OpenAI's new chatbot?

On Thursday, OpenAI released a chatbot that costs $200 a month, and the AI community wasn't sure what to make of it.

The company's new ChatGPT Pro plan grants access to "o1 pro mode," which OpenAI says "uses more computing power for the best answers to the hardest questions." O1 pro mode is a souped-up version of OpenAI's o1 reasoning model, and OpenAI says it answers questions about science, math, and coding "more reliably" and "more comprehensively."

Almost immediately, people began asking it to draw unicorns:

And to design a "crab-based" computer:

And to wax poetic about the meaning of life:

But lots of people were left wondering what, exactly, they would be paying for.

"Has OpenAI shared specific examples of prompts that fail in regular o1 but succeed in o1 pro mode?" asked British computer scientist Simon Willison. "I would like to see a single concrete example that shows its advantage."

That's a fair question; after all, this is the most expensive chatbot subscription on the market. The plan does come with other perks, such as the removal of rate limits and unlimited access to OpenAI's other models. But $2,400 a year isn't chump change, and the value proposition of o1 pro mode specifically remains unclear.

It didn't take long for failure cases to surface. O1 pro mode struggles with Sudoku, and it's tripped up by an optical-illusion joke that's obvious to any human.

OpenAI's internal benchmarks show that o1 pro mode performs only slightly better than the standard o1 on coding and math problems:

Photo credit: OpenAI

OpenAI also ran a "stricter" evaluation on the same benchmarks to demonstrate o1 pro mode's consistency: the model was only considered to have solved a problem if it gave the right answer four out of four times. But even in those tests, the improvements weren't dramatic:

OpenAI o1 pro mode. Photo credit: OpenAI
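To make that stricter criterion concrete, here is a minimal, hypothetical sketch of a "four of four" scoring rule, the idea being that a problem only counts as solved if every one of four sampled answers is correct, so occasional lucky guesses don't inflate the score. The helper and data below are illustrative, not OpenAI's actual evaluation code.

# Hypothetical illustration of a "4 of 4" scoring rule (not OpenAI's code).
def strict_solve_rate(graded_attempts: list[list[bool]], k: int = 4) -> float:
    """Fraction of problems whose first k attempts are all correct."""
    solved = sum(
        1
        for attempts in graded_attempts
        if len(attempts) >= k and all(attempts[:k])
    )
    return solved / len(graded_attempts)

# Example: three problems, four graded attempts each.
results = [
    [True, True, True, True],   # counts as solved
    [True, False, True, True],  # one miss, so not solved
    [True, True, True, True],   # counts as solved
]
print(strict_solve_rate(results))  # prints 0.666...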

OpenAI CEO Sam Altman, who once wrote that OpenAI was on a path "toward intelligence too cheap to meter," was forced to clarify several times on Thursday that ChatGPT Pro isn't for most people.

"Most users will be very happy with o1 in the (ChatGPT) Plus tier!" he wrote on X. "Almost everyone will be best served by our free tier or the Plus tier."

So who is it for? Are there really people out there willing to pay $200 a month to ask toy questions like "Write a three-paragraph essay about strawberries without using the letter 'e'" or "Solve this Math Olympiad problem"? Will they happily part with their hard-earned cash without any solid guarantee that the standard o1 can't satisfactorily answer the same questions?

I asked Ameet Talwalkar, an associate professor of machine learning at Carnegie Mellon and a venture partner at Amplify Partners, for his opinion. "It seems to me to be a big risk to increase the price tenfold," he told TechCrunch via email. "I think in just a few weeks we will have a much better sense of how big the need for this functionality is."

UCLA computer scientist Guy Van den Broeck was more candid in his assessment. "I don't know if the price makes sense," he told TechCrunch, "and whether expensive reasoning models will become the norm."

A charitable read is that this is a marketing misstep. Describing o1 pro mode as best suited to solving "the hardest problems" doesn't tell potential customers much. Nor do vague statements about how the model can "think longer" and demonstrate "intelligence." As Willison points out, without concrete examples of this supposedly improved performance, it's hard to justify paying more at all, let alone ten times the price.

As far as I can tell, the target audience is experts in specialized fields. OpenAI says it plans to give a handful of medical researchers at "leading institutions" free access to ChatGPT Pro, which will include o1 pro mode. Errors matter a great deal in healthcare, and, as Bob McGrew, former research chief at OpenAI, noted on X, better reliability is probably the main benefit of o1 pro mode.

McGrew also mused that o1 pro mode is an example of what he calls an "intelligence overhang": users (and perhaps even the model's creators) don't know how to get value out of any "extra intelligence" because of the fundamental limitations of a simple, text-based interface. As with OpenAI's other models, the only way to interact with o1 pro mode is through ChatGPT, and, to paraphrase McGrew, ChatGPT isn't perfect.

It's also true, though, that $200 sets high expectations. And judging by the early reaction on social media, ChatGPT Pro isn't a slam dunk.
