
What the hyperproduction of AI slop is doing to science

Over the past three years, generative artificial intelligence (AI) has had a profound impact on society. In particular, the influence of AI on human writing has been enormous.

The large language models that underlie AI tools like ChatGPT are trained on a wide range of text data and can now generate complex, high-quality texts themselves.

Most importantly, the widespread use of AI tools has led to hyperproduction of so-called “AI slop”: low-quality AI-generated results created with minimal or no human effort.

Much has been said about what AI writing means for education, work and culture. But what about science? Does AI improve scientific writing, or does it just produce "scientific slop"?

According to a new study by researchers at UC Berkeley and Cornell University, published in Science, the answer is slop.

Generative AI increases academic productivity

The researchers analyzed abstracts from several million preprint articles (publicly available articles that have not yet been peer-reviewed) published between 2018 and 2024.

They examined whether using AI is associated with higher academic productivity, better manuscript quality, and the use of more diverse literature.

An author's number of preprints served as a measure of productivity, while eventual publication in a journal served as a measure of an article's quality.

The study found that the number of preprints an author produced increased dramatically after they began using AI. Depending on the preprint platform, the total number of articles an author published per month after adopting AI increased by between 36.2% and 59.8%.

The increase was largest among non-native English speakers, particularly among Asian authors, for whom it ranged from 43% to 89.3%. For authors from English-speaking institutions and with "Caucasian" names, the rise was more modest, at 23.7% to 46.2%.

These results suggest that AI has been widely used by non-native English speakers to improve their written English.

What about article quality?

The study found that, on average, articles written with AI used more complex language than those written without AI.

Among articles written without AI, those using more complex language were more likely to be published.

This suggests that more complex texts are perceived as higher quality and more scientifically valuable.

For articles written with AI assistance, however, this relationship was reversed: the more complex the language, the less likely the article was to be published. This suggests that complex AI-generated language was being used to mask low-quality scientific work.

AI has broadened the range of academic sources

The study also examined differences in article downloads arriving from the Google and Microsoft search platforms.

Microsoft's Bing search engine introduced an AI-powered Bing Chat feature in February 2023. This allowed the researchers to compare which articles were recommended by the AI-powered search versus those recommended by a regular search engine.

Interestingly, Bing users were exposed to a greater number of sources than Google users, as well as to more recent publications. This is likely due to a technique used by Bing Chat called retrieval-augmented generation (RAG), which combines search results with AI prompts.
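The retrieve-then-prompt pattern can be sketched in a few lines. This is a minimal illustration of the general RAG technique, not Bing Chat's actual pipeline: the toy corpus, the keyword-overlap scoring, and the prompt template are all hypothetical stand-ins.

```python
# Minimal sketch of retrieval-augmented generation (RAG): first retrieve
# documents relevant to the query, then splice them into the prompt that
# is sent to the language model.

CORPUS = {
    "doc1": "Preprint growth accelerated after 2023.",
    "doc2": "Peer review remains the main quality filter.",
    "doc3": "Language models are trained on large text corpora.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        CORPUS.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(query: str) -> str:
    """Augment the user query with retrieved context before generation."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How are language models trained?"))
```

Because the model answers from freshly retrieved search results rather than only from its training data, a RAG system can surface newer and more varied sources, which is consistent with the behavior the study observed in Bing Chat.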

In any case, fears that AI-powered search would get "stuck" recommending old, widely used sources appear to be unfounded.

The way forward

AI is having a major impact on academic writing and publishing. It has become an integral part of academic writing for many scientists, especially non-native English speakers, and will continue to be so in the future.

As AI is integrated into applications such as word processors, email clients, and spreadsheets, it will soon be nearly impossible to avoid using it, whether we like it or not.

Most importantly for science, AI undermines the use of complex, high-quality language as an indicator of scientific merit. Quickly reviewing and evaluating articles based on language quality is becoming increasingly unreliable, and better methods are urgently needed.

As complex language is increasingly used to mask weak scientific contributions, critical, in-depth assessment of a study's methods and contributions is essential during peer review.

One approach is to "fight fire with fire" and use AI verification tools, such as the one recently developed by Andrew Ng at Stanford. Given the ever-increasing number of manuscript submissions and the already heavy workload of academic journal editors, such approaches may be the only viable option.
