
Anthropic aims to fund a new, more comprehensive generation of AI benchmarks

Anthropic is launching a program to fund the development of new kinds of benchmarks to evaluate the performance and impact of AI models, including generative models like its own Claude.

Anthropic's program, unveiled on Monday, will provide payments to third-party organizations that can, as the company writes in a blog post, "effectively measure advanced capabilities in AI models." Interested parties can submit applications, which will be evaluated on a rolling basis.

"Our investment in these evaluations is intended to elevate the entire field of AI safety and provide valuable tools that benefit the whole ecosystem," Anthropic wrote on its official blog. "Developing high-quality, safety-relevant evaluations remains challenging, and demand is outpacing supply."

As we have previously highlighted, AI has a benchmarking problem. The most commonly cited benchmarks for AI today do a poor job of capturing how the average person actually uses the systems being tested. There are also doubts about whether some benchmarks, particularly those published before the advent of modern generative AI, even measure what they claim to measure, given their age.

The very high-level solution Anthropic proposes, which is harder than it sounds, is to create challenging benchmarks with a focus on AI safety and societal impact, built on new tools, infrastructure and methods.

Specifically, the company is calling for tests that assess a model's ability to carry out tasks such as conducting cyberattacks, "enhancing" weapons of mass destruction (e.g., nuclear weapons), and manipulating or deceiving people (e.g., through deepfakes or misinformation). For AI risks related to national security and defense, Anthropic says it is committed to developing some kind of "early warning system" for identifying and assessing risks, though it doesn't reveal in the blog post what such a system might entail.

Anthropic also says the new program will support research into benchmarks and end-to-end tasks that probe AI's potential to aid scientific study, converse in multiple languages, mitigate deep-rooted biases and self-censor toxicity.

To achieve all this, Anthropic envisions new platforms that allow subject-matter experts to develop their own evaluations, as well as large-scale trials of models involving "thousands" of users. The company says it has hired a full-time coordinator for the program and may purchase or expand projects it believes have the potential to scale.

"We offer a range of funding options tailored to the needs and stage of each project," Anthropic writes in the post, though an Anthropic spokesperson declined to provide further details on those options. "Teams will have the opportunity to interact directly with Anthropic's domain experts from the Frontier Red Team, Fine-Tuning Team, Trust and Safety Team, and other relevant teams."

Anthropic's effort to support new AI benchmarks is commendable – assuming, of course, there is enough money and manpower behind it. But given the company's commercial ambitions in the AI race, it might be difficult to trust it completely.

In the blog post, Anthropic is quite open about the fact that it wants certain evaluations it funds to align with the AI safety classifications it developed (with some input from third parties like the nonprofit AI research organization METR). That is entirely within the company's discretion. But it could also force applicants to the program to accept definitions of "safe" or "dangerous" AI that they might not agree with.

Some members of the AI community may also take issue with Anthropic's references to "catastrophic" and "deceptive" AI risks, such as those related to the threat of nuclear weapons. Many experts say there is little, if any, evidence that AI as we know it will achieve world-destroying capabilities superior to humans in the near future. Claims of imminent "superintelligence" merely serve to distract attention from the most pressing AI regulatory issues of the day, such as AI's hallucinatory tendencies, these experts add.

In its post, Anthropic writes that it hopes the program will serve as a "catalyst for progress toward a future where comprehensive AI evaluation is an industry standard." That is a mission the many open, company-independent efforts to create better AI benchmarks can identify with. But it remains to be seen whether those efforts are willing to join forces with an AI vendor whose ultimate loyalty lies with its shareholders.
