In 2023, the science fiction literary magazine Clarkesworld temporarily stopped accepting submissions because so many were generated by artificial intelligence. As far as the editors could tell, many submitters had simply plugged the magazine's detailed story guidelines into an AI and sent in the results. And they weren't alone: other fiction magazines have reported a similar surge of AI-generated submissions.
This is just one example of a pervasive trend. Many longstanding systems relied on the difficulty of writing and reading to limit volume. Generative AI overwhelms those systems because the humans on the receiving end cannot keep up.
It is happening everywhere. Newspapers are flooded with AI-generated letters to the editor, as are scientific journals. Lawmakers are inundated with AI-generated constituent comments. Courts worldwide are swamped with AI-generated filings, especially from litigants representing themselves. AI conferences are flooded with AI-generated research papers. Social media is flooded with AI-generated content. In music, open-source software, education, investigative journalism and more, it's the same story.
Like Clarkesworld initially did, some of these institutions have suspended their submission processes. Others have met offensive uses of AI with defensive responses, often involving countervailing uses of AI. Academic peer reviewers increasingly use AI to judge work that may itself have been created by AI. Social media platforms are turning to AI moderators. Court systems use AI to triage and process caseloads swollen by AI. Employers are adopting AI tools to screen applications. Educators use AI not just to grade papers and proctor exams, but also as a feedback tool for students.
These are all arms races: rapid, adversarial iterations in which a common technology is put to opposing purposes. Many of these arms races have clearly harmful effects. Society suffers when courts are clogged with frivolous cases fabricated by AI. Harm also results when the established measures of academic performance, publications and citations, reward the researchers most willing to fraudulently submit AI-written letters and papers, rather than those whose ideas have the greatest impact. The fear is that AI-enabled fraud will ultimately undermine the systems and institutions that society relies on.
Benefits of AI
Still, some of these AI arms races have surprising hidden benefits, and there is hope that at least some institutions will be able to change in ways that make them stronger.
Science may emerge stronger because of AI, but it faces a problem when AI makes mistakes. Consider the example of nonsensical AI-generated phrases creeping into academic papers.
A scientist using AI to help write a scientific paper can be a good thing, if the tool is used carefully and disclosed openly. AI is increasingly a primary instrument of scientific research: for literature review, for coding and for analyzing data. And for many, it has become an essential aid to expression and scholarly communication. Before AI, better-funded researchers could hire people to help them write their papers. For many authors whose first language is not English, such assistance has been a costly necessity. AI offers it to everyone.
In fiction, fraudulently submitted AI-generated works cause harm, both to human authors, who now face increased competition, and to readers, who may feel cheated after unwittingly reading a machine's work. However, some outlets may choose to welcome AI-assisted submissions, with appropriate disclosure and under clear guidelines, and use AI to judge them on criteria such as originality, fit and quality.
Others may reject AI-generated work, but that choice comes at a cost. It is unlikely that any human editor or detection technology can reliably distinguish human writing from machine writing. Instead, publishers who want to publish only humans will have to limit submissions to a group of authors they trust not to use AI. As long as these policies are transparent, readers can choose the kind of outlet they prefer and read comfortably at either or both.
We also see no problem with a job seeker using AI to polish a résumé or write better cover letters: the rich and privileged have long had access to human help for these things. But it crosses the line when AI is used to lie about identity or experience, or to cheat during interviews.
Likewise, a democracy requires that its citizens be able to express their opinions to their representatives and to one another, through media such as newspapers. The rich and powerful have long been able to hire writers to translate their ideas into compelling prose, and in our view AI offering that help to more people is a good thing. This is also where AI's errors and biases can be harmful. Citizens may use AI as more than a time-saving shortcut; it may reach beyond their own knowledge and skills, generating statements about historical, legal or policy matters that they cannot reasonably be expected to verify independently.
A fraud amplifier
What we don't want is lobbyists using AI in astroturf campaigns, writing many letters and passing them off as individual opinions. That, too, is an older problem that AI makes worse.
What separates the beneficial from the harmful here is not an inherent aspect of the technology, but rather the power dynamics. The same technology that reduces the effort required for a citizen to share their lived experience with legislators also allows corporate interests to misrepresent public opinion at scale. The former is a power-balancing application of AI that strengthens participatory democracy; the latter is a power-concentrating application that threatens it.
In general, we believe that writing and cognitive assistance, long available to the rich and powerful, should be available to everyone. The problem arises when AI makes fraud easier. Any response must strike a balance between embracing this new democratization of access and stopping fraud.
There is no way to turn this technology off. High-performing AIs are widely available and can run on a laptop. Ethical guidelines and clear professional boundaries can help, for those acting in good faith. But there will never be a way to completely stop academic writers, job seekers or citizens from using these tools, whether as legitimate help or to commit fraud. That means more comments, more letters, more applications, more submissions.
The problem is that whoever is on the receiving end of this AI-driven flood cannot handle the increased volume. What may help is developing assistive AI tools that benefit institutions and society while curbing fraud. That will likely mean relying on AI assistance within these adversarial systems, even though the defensive AI will never achieve total dominance.
Weighing harms and benefits
The science fiction community has been wrestling with AI since 2023. Clarkesworld eventually reopened submissions, its editors claiming to have an adequate way to distinguish stories written by humans from those written by AI. Nobody knows how long, or how well, that will work.
The arms race continues. There is no easy way to say whether AI's potential benefits will outweigh its harms, now or in the future. But as a society, we can influence the balance between the harm it causes and the opportunities it presents as we navigate the changing technological landscape.

