
LiveBench is an open LLM benchmark that uses contamination-free test data and objective assessments

A team from Abacus.AI, New York University, Nvidia, the University of Maryland and the University of Southern California has developed a new benchmark that addresses “serious limitations” of the benchmarks already established in the industry. LiveBench is a general LLM benchmark that provides test data free of the contamination that commonly occurs when a dataset finds its way into many models' training data.

What is a benchmark? It is a standardized test to evaluate the performance of AI models. The assessment consists of a set of tasks or metrics against which LLMs can be measured. It gives researchers and developers something to check performance against, helps track progress in AI research, and more.

LiveBench uses “frequently updated questions from recent sources, scores answers automatically according to objective ground-truth values, and features a wide variety of challenging tasks spanning math, coding, reasoning, language, instruction following, and data analysis.”

The LiveBench release is especially notable because one of the contributors is Yann LeCun, a pioneer in the world of AI, Meta's chief AI scientist, and someone who recently got into a dispute with Elon Musk. Also included are Colin White, head of research at Abacus.AI, and research scientists Samuel Dooley, Manley Roberts, Arka Pal, and Siddartha Naidu; Siddhartha Jain, senior research scientist at Nvidia; and academics Ben Feuer, Ravid Shwartz-Ziv, Neel Jain, Khalid Saifullah, Chinmay Hegde, Tom Goldstein, Willie Neiswanger, and Micah Goldblum.

“Like many in the community, we knew we needed better LLM benchmarks because the existing ones didn't match our qualitative experience with LLMs,” Goldblum says in an email to VentureBeat. “This project began with the initial thought that we should create a benchmark where different questions are regenerated every time a model is evaluated, making it impossible to contaminate the test set. I chatted with Colin and Samuel from Abacus.AI and eventually built this thing out, with funding and support from Abacus.AI, into far more than we had originally imagined. We joined forces with folks from NYU, Nvidia, USC, and also the folks at the University of Maryland who had been thinking about class tracking, and the project became a big team effort.”

LiveBench: What you need to know

“With the increasing importance of large language models (LLMs), it has become increasingly clear that traditional machine learning benchmark frameworks are no longer sufficient to evaluate new models,” the team explains in a published white paper (PDF). “Benchmarks are typically published on the Internet, and most modern LLMs incorporate large portions of the Internet into their training data. If the LLM has seen the questions of a benchmark during training, its performance on that benchmark will be artificially inflated, making many LLM benchmarks unreliable.”

The authors of the white paper note that while benchmarks that rely on LLM or human prompting and judging are becoming increasingly popular, their drawbacks include being prone to errors and unconscious bias. “LLMs often favor their own answers over those of other LLMs, and LLMs prefer more detailed answers,” they write. Human assessors are not immune, either. They can introduce biases, such as those related to output formatting and the tone and formality of writing. In addition, humans can influence how questions are generated by offering less diverse queries, favoring certain topics that don't test a model's general capabilities, or simply writing poorly worded prompts.

“With static benchmarks, the honor system applies: anyone can train on the test data and claim they achieved 100 percent accuracy. However, the community generally doesn't cheat too much, so static benchmarks like ImageNet or GLUE have been invaluable so far,” Goldblum explains. “LLMs introduce a serious complication. To train them, we scrape large swaths of the Internet without human supervision, so we don't really know the contents of their training set, which might include test sets from popular benchmarks. This means that the benchmark no longer measures the LLM's general capabilities, but rather its memorization. So we have to create yet another new benchmark, and the cycle starts over each time contamination occurs.”

To combat this, LiveBench publishes new questions every month to help minimize potential contamination of the test data. The questions are based on recently released datasets and on recent math competitions, arXiv papers, news articles, and IMDb movie summaries. Because each question has a verifiable, objective ground-truth answer, it can be graded accurately and automatically, without LLM judges. There are currently 960 questions available, with newer and harder questions released every month.
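To illustrate what objective, automated grading means in practice, here is a minimal sketch of an exact-match scorer. This is not LiveBench's actual scoring code; the answer-extraction rule and the example question are assumptions made purely for illustration.

```python
import re

def extract_final_answer(model_output: str) -> str:
    """Take the last non-empty line of the model's output as its final answer.
    (A real scorer would use task-specific parsing rules.)"""
    lines = [line.strip() for line in model_output.splitlines() if line.strip()]
    return lines[-1] if lines else ""

def score_answer(model_output: str, ground_truth: str) -> float:
    """Return 1.0 if the model's final answer matches the ground truth after
    normalizing case and whitespace, else 0.0. No LLM judge is involved."""
    normalize = lambda s: re.sub(r"\s+", " ", s).strip().lower()
    answer = extract_final_answer(model_output)
    return 1.0 if normalize(answer) == normalize(ground_truth) else 0.0

# Hypothetical math question whose ground-truth answer is "12".
output = "The area is (1/2) * base * height = (1/2) * 6 * 4.\n12"
print(score_answer(output, "12"))  # -> 1.0
```

Because the ground truth is a fixed value rather than a judgment call, any two runs of the scorer produce the same grade, which is what lets LiveBench skip LLM and human judges entirely.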

Tasks and categories

An initial set of 18 tasks across the six categories above is available today. These are tasks that either “use a continuously updated source of information for their questions” or are “more sophisticated or diverse versions of existing benchmark tasks,” such as those from AMPS, Big-Bench Hard, IFEval, or bAbI. Here is the breakdown of tasks by category:

  • Mathematics: questions from high school math competitions from the past 12 months, as well as harder versions of AMPS questions
  • Coding: code generation and a novel code completion task
  • Reasoning: harder versions of Big-Bench Hard's Web of Lies, positional reasoning from bAbI, and Zebra Puzzles
  • Language comprehension: three tasks involving Connections word puzzles, a typo-correction task, and a task to unscramble the plot summaries of recent movies featured on IMDb and Wikipedia
  • Instruction following: four tasks to rewrite, simplify, summarize, or generate stories about recent articles from The Guardian, while adhering to requirements such as word limits or including specific elements in the answer
  • Data analysis: three tasks using recent datasets from Kaggle and Socrata, namely reformatting tables, predicting which columns can be used to join two tables, and predicting the correct type annotation of a data column

Each task has a different level of difficulty, from easy to very hard. The idea is that top models should achieve a success rate of roughly 30 to 70 percent.

LiveBench LLM leaderboard as of June 12, 2024.

The benchmark's developers say they evaluated many “prominent closed-source models as well as dozens of open-source models” ranging in size from 500 million to 110 billion parameters. Citing LiveBench's difficulty, they note that even the top models achieved an accuracy of less than 60 percent. For example, OpenAI's GPT-4o, which tops the benchmark's leaderboard, has a global average score of 53.79, followed by GPT-4 Turbo with 53.34. Anthropic's Claude 3 Opus is in third place with 51.92.

What it means for the enterprise

Business leaders have a tough time thinking about how to leverage AI and use the technology to develop a solid strategy. Asking them to choose the right LLMs on top of that creates unnecessary stress. Benchmarks can provide some assurance that models are performing exceptionally well, much like product reviews do. But are executives getting a full view of what's under the hood?

“Navigating all the different LLMs is a big challenge, and there's unwritten knowledge about which benchmark numbers are misleading due to contamination, which LLM-judged scores are highly biased, and so on,” Goldblum explains. “LiveBench makes comparing models easy because you don't have to worry about these issues. Different LLM use cases will require new tasks, and we see LiveBench as a framework that should help other scientists create their own evaluations down the road.”

Comparing LiveBench with other benchmarks

Claiming to have a better evaluation standard is one thing, but how does it compare to benchmarks the AI industry has relied on for some time? The team investigated this, checking how LiveBench's rankings lined up with prominent LLM benchmarks, namely LMSYS's Chatbot Arena and Arena-Hard. LiveBench showed “generally similar” trends to its industry peers, although some models “were significantly stronger on one benchmark than the other, potentially suggesting some drawbacks of LLM judging.”

Bar chart comparing LiveBench and Chatbot Arena results for the same models. Image credit: LiveBench
Bar chart comparing LiveBench and Arena-Hard results for the same models. Surprisingly, GPT-4 models perform significantly better on Arena-Hard compared to LiveBench, which may be due to the known bias from using GPT-4 itself as the evaluation criteria. Image credit: LiveBench

While these benchmarks show which models perform best, each one evaluates LLMs differently, so the scores aren't exactly a direct comparison. As LiveBench notes, discrepancies can stem from factors such as known biases. For example, OpenAI's GPT-4-0125-preview and GPT-4 Turbo-2024-04-09 performed significantly better on Arena-Hard compared to LiveBench, but this is said to be “due to the known bias from using GPT-4 itself as the LLM judge.”
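As a rough illustration of how one might quantify whether two leaderboards show “generally similar” trends, the sketch below computes a Spearman rank correlation over a handful of models. The scores are made-up placeholders, not actual LiveBench or Arena-Hard numbers, and this is not part of LiveBench's own methodology.

```python
from scipy.stats import spearmanr

# Hypothetical leaderboard scores for the same set of models
# (placeholder values, not real LiveBench or Arena-Hard results).
livebench_scores = {"model-a": 53.8, "model-b": 53.3, "model-c": 51.9, "model-d": 44.5}
arena_hard_scores = {"model-a": 82.6, "model-b": 78.1, "model-c": 60.4, "model-d": 55.0}

models = sorted(livebench_scores)
a = [livebench_scores[m] for m in models]
b = [arena_hard_scores[m] for m in models]

# A high rank correlation means the two benchmarks order the models similarly,
# even if their absolute scores sit on different scales.
rho, p_value = spearmanr(a, b)
print(f"Spearman rank correlation: {rho:.2f} (p={p_value:.3f})")
```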

When asked whether LiveBench is a startup or simply a benchmark for the masses, Dooley replies that it's “an open source benchmark that anyone can use and contribute to. We plan to keep it going by releasing more questions every month. We also plan to add more categories and tasks in the coming months to expand our ability to evaluate LLMs as their skills change and adapt. We are all big fans of open science.”

“We believe that exploring the capabilities of LLMs and choosing a high-performing model is a crucial part of developing an LLM-focused product,” says White. “Proper benchmarks are necessary, and LiveBench is a big step forward. In addition, good benchmarks speed up the process of developing good models.”

Developers can download LiveBench's code from GitHub and its datasets from Hugging Face.
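As a quick way to explore the question sets, the snippet below loads one category with the Hugging Face `datasets` library. The per-category dataset name (`livebench/math`) and the `test` split reflect how the data appears to be organized on Hugging Face at the time of writing; check the GitHub repository for the authoritative instructions and evaluation scripts.

```python
# Minimal sketch: browse LiveBench questions from Hugging Face.
# The dataset name "livebench/math" and the "test" split are assumptions based on
# how the data appears to be published; consult the repo if they have changed.
from datasets import load_dataset

math_questions = load_dataset("livebench/math", split="test")

print(f"Loaded {len(math_questions)} math questions")
first = math_questions[0]
# Field names vary by task, so inspect the columns instead of hard-coding them.
print(list(first.keys()))
```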
