AI-assisted writing is booming in academic journals. Here's why that's okay

If you search Google Scholar for the phrase “as an AI language model”, you will find plenty of legitimate AI research literature – but also some rather suspect results. For example, one paper on agricultural technology contains the phrase verbatim in its main text.

Obvious gaffes like this aren’t the only signs that researchers are increasingly turning to generative AI tools when writing up their research. A recent study examined the frequency of certain words in academic writing (such as “commendable,” “meticulous,” and “intricate”) and found that they became far more common after the launch of ChatGPT – so much so that 1% of all journal articles published in 2023 may have contained AI-generated text.
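To get a feel for the method behind frequency studies like this one, here is a minimal sketch in Python (the word list and the mini-corpora below are invented for illustration; they are not the study’s actual data or code):

    import re

    # Words of the kind such studies flag as spiking after ChatGPT's release.
    FLAGGED = {"commendable", "meticulous", "intricate"}

    def flagged_rate(abstracts):
        """Occurrences of flagged words per 1,000 words across a corpus of abstracts."""
        total, hits = 0, 0
        for text in abstracts:
            words = re.findall(r"[a-z']+", text.lower())
            total += len(words)
            hits += sum(1 for w in words if w in FLAGGED)
        return 1000 * hits / total if total else 0.0

    # Hypothetical mini-corpora; a real analysis would compare millions of abstracts.
    pre_2023 = ["We describe a simple method for estimating soil moisture."]
    post_2023 = ["This commendable study offers a meticulous and intricate analysis."]
    print(flagged_rate(pre_2023), flagged_rate(post_2023))

Comparing rates like these across papers published before and after ChatGPT’s release is, in essence, how such studies estimate the prevalence of AI-generated text.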

(Why do AI models overuse these words? One speculation is that it’s because they are more common in the English spoken in Nigeria, where key parts of model training often take place.)

The aforementioned study also examined preliminary data from 2024, which suggests that AI writing assistance is only becoming more common. Is this a crisis for modern scholarship or a boon for academic productivity?

Who deserves credit for AI writing?

Many people are concerned about the use of AI in scientific work. Indeed, the practice has been described as “contaminating” the scientific literature.

Some argue that using AI output amounts to plagiarism. If your ideas have been copied from ChatGPT, it's questionable whether you actually deserve credit for them.

However, there are important differences between “plagiarizing” text written by humans and text produced by AI. Those who plagiarize a human’s work receive credit for ideas that should have gone to the original author.

In contrast, it’s questionable whether AI systems like ChatGPT can contribute ideas, let alone deserve recognition for them. An AI tool is more like your phone's autocomplete feature than a human researcher.

The question of bias

Another concern is that AI output could be biased in ways that seep into the scientific record. Older language models notoriously tended to portray people who are female, Black and/or gay in markedly less favorable ways than people who are male, white and/or heterosexual.

This kind of bias is less pronounced in the present version of ChatGPT.

However, other studies have found a different kind of bias in ChatGPT and other large language models: a tendency to reflect a left-liberal political ideology.

Such bias could subtly distort the scientific work produced using these tools.

The hallucination problem

The biggest concern relates to a well-known limitation of generative AI systems: they frequently make serious errors.

For example, when I asked ChatGPT-4 to generate an ASCII image of a mushroom, it gave me the following output.

   .--'|
   /___^ |     .--.
       ) |    /    \
      /  |   |      |
      \  |    `-._ /
       \          `~~`
        `-..._____.-`

It then confidently told me that I could use this image of a “mushroom” for my own purposes.

Such confidently delivered errors have been dubbed “AI hallucinations” and “AI bullshit”. While it’s easy to see that the ASCII image above doesn’t look like a mushroom at all (it looks more like a snail), it can be much harder to spot the errors ChatGPT makes when it surveys the scientific literature or describes the state of a philosophical debate.

Unlike (most) people, AI systems fundamentally don’t care about the truth of what they say. Used carelessly, their hallucinations could corrupt the scientific record.

Should AI-generated text be banned?

One response to the rise of text generators has been to ban them outright. For example, Science – one of the world’s most influential scientific journals – prohibits any use of AI-generated text.

I see two problems with this approach.

The first problem is practical: current tools for detecting AI-generated text are highly unreliable. This includes the detector built by ChatGPT’s own developers, which was taken offline after its accuracy rate turned out to be just 26% (alongside a 9% false-positive rate). Humans also make mistakes when judging whether something was written by AI.

It is also possible to circumvent AI text detectors. Online communities actively explore how to prompt ChatGPT in ways that allow the user to evade detection. Human users can also superficially rewrite AI output, effectively scrubbing away its traces (such as the overuse of the words “commendable,” “meticulous,” and “intricate”), as the sketch below illustrates.
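To see how fragile such surface cues are, consider this toy sketch (the keyword “detector” and synonym map here are hypothetical and far cruder than real statistical detectors, but the evasion principle is the same):

    import re

    # A deliberately naive "detector": flags text containing any telltale word.
    TELLTALE = {"commendable", "meticulous", "intricate"}
    SYNONYMS = {"commendable": "praiseworthy", "meticulous": "careful", "intricate": "complex"}

    def looks_ai_written(text):
        """Return True if the text contains any telltale word."""
        return bool(set(re.findall(r"[a-z']+", text.lower())) & TELLTALE)

    def superficial_rewrite(text):
        """Swap each telltale word for a synonym, leaving everything else untouched."""
        for word, replacement in SYNONYMS.items():
            text = re.sub(word, replacement, text, flags=re.IGNORECASE)
        return text

    draft = "This commendable study offers a meticulous and intricate analysis."
    print(looks_ai_written(draft))                       # True
    print(looks_ai_written(superficial_rewrite(draft)))  # False

A few trivial word swaps defeat the heuristic entirely – which is one reason detection based on surface features is so easy to evade.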

The second problem is that an outright ban on generative AI would prevent us from realizing the benefits of these technologies. Used well, generative AI can boost academic productivity by streamlining the writing process. In this way, it could help expand human knowledge. Ideally, we should try to capture these benefits while avoiding the problems.

The problem is poor quality control, not AI

The most serious problem with AI is the risk of introducing unnoticed errors that lead to sloppy science. Instead of banning AI, we should try to ensure that false, implausible or biased claims cannot make it into the academic record.

After all, humans can also produce writing with serious errors, and mechanisms such as peer review often fail to prevent its publication.

We need to get better at ensuring that academic papers are free of serious errors, whether those errors are caused by careless use of AI or by sloppy human scholarship. Not only is this more achievable than policing AI use, it would also improve the standards of academic research overall.

This would be (as ChatGPT might put it) a commendable and meticulously intricate solution.
