
Can ChatGPT edit fiction? 4 professional editors asked AI to do their job – and it ruined their short story

Writers have been using AI tools for years – from Microsoft Word’s spellcheck (which regularly makes unwanted corrections) to the passive-aggressive Grammarly. But ChatGPT is different.

ChatGPT’s natural language processing enables a dialogue, much like a conversation – albeit with a rather odd acquaintance. And it can generate vast amounts of copy, quickly, in response to queries posed in ordinary, everyday language. This suggests, at least superficially, it can do some of the work a book editor does.

We are professional editors, with extensive experience in the Australian book publishing industry, who wanted to understand how ChatGPT would perform when compared to a human editor. To find out, we decided to ask it to edit a short story that had already been worked on by human editors – and we compared the results.

The experiment: ChatGPT vs human editors

The story we selected, The Ninch (written by Rose), had passed through three separate rounds of editing, with four human editors (and a typesetter).

The first version had been rejected by literary journal Overland, but its fiction editor Claire Corbett had given generous feedback. The next version received detailed advice from freelance editor Nicola Redhouse, a judge of the Big Issue fiction edition (which had shortlisted the story). Finally, the piece found a home at another literary journal, Meanjin, where deputy editor Tess Smurthwaite incorporated comments from the issue’s freelance editor and their typesetter in her correspondence.

We had a wealth of human feedback to compare ChatGPT’s recommendations with.

We used a standard, free ChatGPT generative AI tool for our edits, which we conducted as separate series of prompts designed to evaluate the scope and success of AI as an editorial tool.

We wanted to see if ChatGPT could develop and fine-tune this unpublished work – and if so, whether it could do it in a way that resembled current editorial practice. By comparing it with human examples, we tried to determine where, and at what stage in the process, ChatGPT might be most successful as an editorial tool.

The story includes expressive descriptions, poetic imagery, strong symbolism and a subtle subtext. It explores themes of motherhood and nature, and hints at deeper mysteries.

We selected it because we believe the literary genre, with its play and experimentation, poetry and lyricism, offers rich pickings for complex editorial conversations. (And because we knew we could get permission from all participants in the process to share their feedback.)

In the story, a mother reflects on her untamed, sea-loving child. Supernatural possibilities are hinted at before the story turns closer to home, ending with the mother revealing her own divergent nature – and looping back to give more meaning to the title:

pinching the skin between my toes … Making each digit its own unique peninsula.

The story used for the experiment, about a mother and her untamed, sea-loving child, hinted at the supernatural.
Mae I. Balland/Pexels

Round 1: the first draft

We began with a simple, general prompt, assuming the smallest amount of editorial guidance from the author. (Authors submitting stories to magazines and journals generally don’t give human editors a detailed, prescriptive brief.)

Our initial prompt for all three examples was: “Hi ChatGPT, could I please ask for your editorial suggestions on my short story, which I’d like to submit for publication in a literary journal?”

Responding to the first version of the story, ChatGPT provided a summary of key themes (motherhood, connection to nature, the mysteries of the ocean) and made a list of editorial suggestions.

Interestingly, ChatGPT didn’t pick up that the story was now published and attributed to an author – raising questions about its ability, or inclination, to identify plagiarism. Nor did it define the genre, which is one of the first assessments an editor makes.

ChatGPT’s suggestions were: add more description of the coastal setting, provide more physical description of the characters, break up long paragraphs to make the piece more reader-friendly, add more dialogue for characterisation and insight, make the sentences shorter, reveal more inner thoughts of the characters, expand on the symbolism, show don’t tell, incorporate foreshadowing earlier, and provide resolution rather than ending on a mystery.

All good, if stock standard, advice.

ChatGPT also suggested reconsidering the title – clearly not making the connection between mother and daughter’s ocean affinity and their webbed toes – and reading the story aloud to help identify awkward phrasing, pacing and structure.

While this wasn’t particularly helpful feedback, it was not technically incorrect.

ChatGPT picked up on the major themes and main characters. And the advice for more foreshadowing, dialogue and description, together with shorter paragraphs and an alternative ending, was generally sound.

In fact, it echoed the usual feedback you’d get from a creative writing workshop, or the sort of advice offered in books on the writing craft.

They are the kind of suggestions an editor might write in response to almost any text – not particularly specific to this story, or to our stated aim of submitting it to a literary publication.

ChatGPT’s editing advice was not specific to the story.

Stage two: AI (re)writes

Next, we provided a second prompt, responding to ChatGPT’s initial feedback – attempting to emulate the back-and-forth discussions that are a key part of the editorial process.

We asked ChatGPT to take a more practical, interventionist approach and rework the text in line with its own editorial suggestions:

Thank you for your feedback about uneven pacing. Could you please suggest places in the story where the pace needs to speed up or slow down? Thank you too for the feedback about imagery and description. Could you please suggest places where there is too much imagery and it needs more action storytelling instead?

That’s where things fell apart.

ChatGPT offered a radically shorter, modified story. The atmospheric descriptions, evocative imagery and nods towards (unspoken) mystery were replaced with unsubtle phrases – which Rose swears she would never have written, or signed off on.

Lines added included: “my daughter has always been an enigma to me”, “little did I know” and “a sense of unease washed over me”. Later in the story, this phrasing was clumsily suggested a second time: “relief washed over me”.

The author’s unique descriptions were changed to familiar cliches: “rugged beauty”, “roar of the ocean”, “unbreakable bond”. ChatGPT also changed the text from Australian English (which all Australian publications require) to US spelling and style (“realization”, “mom”).

In summary, a story where a mother sees her daughter as a “southern selkie going home” (phrasing that hints at a speculative subtext) on a rocky outcrop and recognises her (in all possible, playful senses of that word) was changed to a fishing tale, where a (definitely human) girl arrives home holding up, we kid you not, “a shiny fish”.

It became hard to give credence to any of ChatGPT’s advice.

Esteemed editor Bruce Sims once advised that it’s not an editor’s job to fix things; it’s an editor’s job to point out what needs fixing. But if you are asked to be a hands-on editor, your revisions should be an improvement on the original – not just different. And certainly not worse.

It is our industry’s maxim, too, to first do no harm. Not only did ChatGPT not improve Rose’s story, it made it worse.

What did the human editors do?

ChatGPT’s edit didn’t come close to the calibre of insight and editorial know-how offered by Overland editor Claire Corbett. Some examples:

There’s some beautiful writing and incredible themes, but the quotes about drowning are heavy-handed; they’re given the job of foreshadowing suspense, creating unease in the reader, rather than the narrator doing that job.

The biggest problem is that final transition – I don’t know how to read the narrator. Her emotions don’t seem to fit the situation.

For me stories are driven by decisions and I’m not clear what decision our narrator, or anyone else, in the story faces.

It’s entirely possible I’m not getting something important, but I feel that if I’m not getting it, our readers won’t either.

Freelance editor Nicola, who has a personal relationship with Rose, went even further in her exchange (in response to the next draft, where Rose had attempted to address the issues Claire identified). She pushed Rose to work and rework the last sentence until they both felt the language lock in and land.

I’m not 100% sold on this line. I think it’s a little confusing … It might just be too much hinted at in too subtle a way for the reader.

Originally, the final sentence read: “Ready to make my slower way back to the house, retracing – overwriting – any sign of my own less-than more-than normal prints.”

The final version is: “Ready to make my slower way back to the house, retracing, overwriting, any sign of my own less-than, more-than, normal prints.” With the addition of a final standalone line: “I have seen what I wanted to see: her, me, free.”

Claire and Nicola’s feedback shows how an editor is a story’s ideal reader. A good editor can guide the author through problems with perspective and emotional dynamics – going beyond the simple mechanics of grammar, sentence length and the number of adjectives.

In other words, they show something we call editorial intelligence.

Editorial intelligence is akin to emotional intelligence. It incorporates intellectual, creative and emotional capital – all gained from lived experience, complemented by technical skills and industry expertise, applied through the prism of human understanding.

Skills include confident conviction, based on deep accumulated knowledge, meticulous research, cultural mediation and social skills. (After all, the author doesn’t have to do what we say – ours is a persuasive occupation.)

An editor is a story’s ideal reader.
Getty Images

Round 2: the revised story

Next, we submitted a revised draft that had addressed Claire’s suggestions and incorporated the conversations with Nicola.

This draft was submitted with the same initial prompt: “Hi ChatGPT, could I please ask for your editorial suggestions on my short story, which I’d like to submit for publication in a literary journal?”

ChatGPT responded with a summary of themes and editorial suggestions very similar to what it had offered in the first round. Again, it didn’t pick up that the story had already been published, nor did it clearly identify the genre.

For the follow-up, we asked specifically for an edit that corrected any issues with tense, spelling and punctuation.

It was a laborious process: the 2,500-word piece had to be submitted in chunks of 300–500 words and the revised sections manually combined.
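The chunk-and-reassemble workflow above can be sketched in a few lines of Python. This is a hypothetical illustration of the manual process, not anything we actually automated; the function name and the 500-word limit are our own:

```python
# Hypothetical sketch: split a manuscript into prompt-sized chunks of
# at most `max_words` words, mirroring the manual 300-500-word process.
def chunk_text(text, max_words=500):
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

# A stand-in for a 2,500-word piece: it splits into five 500-word chunks,
# each of which would be pasted into the chat and the edits recombined.
story = " ".join(f"word{i}" for i in range(2500))
chunks = chunk_text(story)
print(len(chunks))  # → 5
```

Note that naive word-splitting like this would cut mid-sentence; in practice we split on paragraph boundaries so each chunk read as coherent prose.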

However, these simpler editorial tasks were clearly more in ChatGPT’s ballpark. When we created a document (in Microsoft Word) that compared the original and AI-edited versions, the flagged changes looked very much like a human editor’s tracked changes.
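A rough equivalent of that comparison step can be done outside Word with Python’s standard difflib module. The before/after sentences below are invented for illustration, not taken from the story:

```python
import difflib

# Invented before/after word lists, standing in for the original and
# AI-edited versions of a passage.
original = "The sea held her gaze, a southern selkie going home.".split()
edited = "The sea held her gaze, an enigma to me.".split()

# unified_diff marks removed words with "-" and inserted words with "+",
# much as tracked changes flag deletions and insertions.
diff = list(difflib.unified_diff(original, edited,
                                 fromfile="original", tofile="ai-edit",
                                 lineterm=""))
for line in diff:
    print(line)
```

Running this prints one word per line, with the evocative phrasing struck out and the cliche inserted in its place.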

But ChatGPT’s changes revealed its own writing preferences, which didn’t allow for artistic play and experimentation. For example, it reinstated prepositions like “in”, “at”, “of” and “to”, which slowed down the reading and reduced the creativity of the piece – and altered the writing style.

This makes sense when you realise the datasets that drive ChatGPT mean it explicitly works toward the word most likely to come next. (This might be directed differently in future, towards more creative, and less stable or predictable models.)
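The “most likely next word” idea can be illustrated with a toy frequency model. This is a deliberately crude sketch of the principle only – real language models use neural networks over long contexts, not bigram counts – and the corpus is invented:

```python
from collections import Counter, defaultdict

# Toy corpus: count which word most often follows each word.
corpus = "the sea was calm the sea was wild the sky was calm".split()
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    # Pick the single highest-frequency continuation - the "safe" choice
    # that smooths away rarer (more surprising) alternatives.
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # → "sea" (seen twice, vs "sky" once)
print(most_likely_next("was"))  # → "calm" (seen twice, vs "wild" once)
```

Always taking the most frequent continuation is exactly the tendency that pulls prose toward cliche: the rarer, more original word loses to the common one.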

Round 3: our final submission

In the third and final round of the experiment, we submitted the draft that had been accepted by Meanjin.

The process kicked off with the same initial prompt: “Hi ChatGPT, could I please ask for your editorial suggestions on my short story, which I’d like to submit for publication in a literary journal?”

Again, ChatGPT offered its rote list of editorial suggestions. (Was this even editing?)

This time, we followed up with separate prompts for each element we wanted ChatGPT to review: title, pacing, imagery/description.

ChatGPT came back with suggestions for how to revise specific parts of the text, but the suggestions were once more formulaic. There was no attempt to offer – or support – any decision to go against familiar tropes.

Many of ChatGPT’s suggestions – much like the machine rewrites earlier – were heavy-handed. The alternative titles, like “Seaside Solitude” and “Coastal Connection”, used cringeworthy alliteration.

In contrast, Meanjin’s editor Tess Smurthwaite – on behalf of herself, copyeditor Richard McGregor, and typesetter Patrick Cannon – offered light revisions:

The edits are relatively minimal, but please feel free to reject anything that you’re not comfortable with.

Our typesetter has queried one thing: on page 100, where “Not like a thing at all” has become a new para. He wants to know whether the quote marks should change. Technically, I’m thinking that we should add a closing one after “not a thing” and then an opening one on the next line, but I’m also worried it might read like the new para is a response, and that it hasn’t been said by Elsie. Let me know what you think.

Many of ChatGPT’s suggestions were heavy-handed.
Tara Winstead/Pexels

Sometimes editorial expertise shows itself in not changing a text. Different isn’t necessarily good. It takes an expert to recognise when a story is working just fine. If it ain’t broke, don’t fix it.

It also takes a certain sort of aerial, bird’s-eye view to notice when the way type is set creates ambiguities in the text. Typesetters really are akin to editors.

The verdict: can ChatGPT edit?

So, ChatGPT can give credible-sounding editorial feedback. But we recommend editors and authors don’t ask it for individual assessments or expert interventions any time soon.

A major problem that emerged early in this experiment involved ethics: ChatGPT didn’t ask for or confirm the authorship of our story. A journal or magazine would ask an author to confirm a text is their own original work during the process: either at submission or contract stage.

A freelance editor would likely use other questions to determine the same answer – and in the process of asking about the author’s plans for publication, they’d also determine the author’s own stylistic preferences.

Human editors show their credentials through their work history, and keep their experience up-to-date with professional training and qualifications.

What might the ethics be, we wonder, of giving the same recommendations to every author asking for editing advice? You might be disgruntled to receive generic feedback if you expect, or have paid for, individual engagement.

As we’ve seen, when writing challenges expected conventions, AI struggles to respond. Its primary function is to appropriate, amalgamate and regurgitate – which is not enough when it comes to editing literary fiction.

Literary writing aims to – and often does – convey far more than what the words on screen explicitly say. Literary writers strive for evocative, original prose that draws upon subtext and calls up undercurrents, making use of nuance and implication to create imagined realities and invent unreal worlds.

At this stage of ChatGPT’s development, literally following the advice of its editing tools to edit literary fiction is likely to make it worse, not better.

In Rose’s case, her oceanic allegory about difference, with a nod to the supernatural, was turned into a story about a fish.

ChatGPT is ‘like the newest intern’

This experiment shows how AI and human editors could work together. AI suggestions can be scrutinised – and integrated or dismissed – by authors or editors during the creative process.

And while many of its suggestions weren’t that useful, AI efficiently identified issues with tense, spelling and punctuation (within an excessively narrow interpretation of those rules).

Without human editorial intelligence, ChatGPT does more harm than good. But when used by human editors, it’s like any other tool – as good, or bad, as the tradesperson who wields it.
