
AI won’t take this copyeditor’s job just yet


AI. Aren’t you just so fed up with hearing people whine about it? But if you’re involved in words – lining them up in your own glorious prose, or editing them – I’m afraid you can’t stick your head in the sand and hope it’ll all be fine.

Most of us recognise the name ChatGPT. That’s the generative AI program that’s been grabbing the headlines since it was launched on the unsuspecting public on 30 November 2022. There are many others. This post lists twenty, as at June 2023. Since then, new versions of some of those have come out; doubtless more are about to launch or are in the making.

What’s wrong with generative AI?

Since autumn 2022, ChatGPT has been notorious for, well, inventing stuff. Let’s not sugar-coat it: it’s been telling lies and bringing down reputations. Now it can access the internet, so how’s it going to tell the difference between truth, error and downright lies out there? Human beings struggle, and we have reason and intuition on our side.

The big cheat with generative AI is that it sounds human – it writes plausibly in natural language. So the unwary are lulled into a very false sense of security.

Editor chum Lisa Cordaro blogged on generative AI, unpicking the far-reaching consequences of allowing a bot to replace a human.

Generative AI is a tool.

It is not an answer.

It’s only as good as the human who wields it.

Consider: if you had a talkative hammer that assured you it could also drill and glue and cut and drive screws, would you give it all your money and walk away, expecting to come home to a newly built house, the house of your dreams?

Didn’t think so.

An editor colleague asked Bing’s AI whether a piece of work was plagiarised. Yes, it said, definitely. When pushed, it finally gave the location of the supposed original text. That source was dug out of the interwebs – and the ‘plagiarised’ passage wasn’t in it. Bing AI is quite happy to accuse people of plagiarism without grounds.

It’s not just unfortunate that the technology isn’t reliable. It is inflicting harm.

Testing generative AI’s usefulness

As I write the first draft of this post (late October 2023), I’m mulling over the webinar I went to this morning on generative AI and how it can help businesses to grow.

The most interesting reaction came from the audience filling the chat – people saying, quite bluntly, that AI is not yet fit for the purposes people are trying to put it to. Even the presenter, largely pro-AI, kept saying that you have to check the output.

Well, of course you do.

Just because generative AI wrote it doesn’t make it good, or right, or fit for purpose, or even worth reading at all.

The first demonstration was having ChatGPT 4 (the paid version) write a limerick on a given theme. Well – the limerick had the right number of lines and the right rhythm, but the words were gobbledegook.

I’ve found a lot of that with ChatGPT’s output – it looks OK on the surface, but start to dig a little and… nope. Generative AI is basically souped-up predictive text, and we all know how wrong that can go! Admittedly, it’s very souped-up. What it’s doing is trawling through vast amounts of what other people have written and mimicking the patterns it finds. (Sometimes this content is stolen – let’s not forget the forest of copyright issues this has also led to. Generative AI generates more than just dodgy text!)
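If you’re wondering what ‘souped-up predictive text’ actually means, here’s a deliberately toy sketch in Python – a little bigram model I’ve made up for illustration, which does nothing but count which word follows which in a scrap of text and then parrot those patterns back. This is emphatically not how any real chatbot is built (the real things use neural networks trained on oceans of text), but the core trick is the same: pick a plausible next word, over and over.

    import random
    from collections import defaultdict

    # A toy 'predictive text' model: record which word follows which,
    # then generate text by repeatedly picking a recorded follower.
    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    follows = defaultdict(list)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current].append(nxt)

    def generate(start, length=8):
        word, output = start, [start]
        for _ in range(length):
            options = follows.get(word)
            if not options:  # dead end: no recorded follower for this word
                break
            word = random.choice(options)
            output.append(word)
        return " ".join(output)

    print(generate("the"))  # e.g. 'the dog sat on the mat and the cat'

Run it a few times and you’ll get fluent-ish fragments that mean nothing in particular – plausible on the surface, hollow underneath. Scale that up a few billion times and you have generative AI.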

It does the same with images – it scrapes existing artwork and produces very generic logos and illustrations based on what it’s seen. So I was heartened this same week to learn that the fight-back has begun there too, with the start of code to ‘poison’ artwork so AI doesn’t see what’s actually there, thus preventing an artist from having their work ripped off without consent. Brilliant idea. Keep an eye on this development!

There’s a function in ChatGPT in which you can upload a spreadsheet and ask the bot to analyse the figures. We were warned in the webinar that we would, of course, have to check the interpretations carefully, because ChatGPT is – let’s call it unreliable. I’m feeling momentarily charitable.

But just think about this: we’re expected to upload commercially sensitive data to an organisation known to annex other people’s intellectual property – and for what? The dubious benefit of an analysis that can’t be trusted even as an analysis, let alone as an ethical one.

No thanks.

So – what is generative AI good for?

With needle-sharp prompts, generative AI can be a helpful tool for getting a writer past the perennial problem of blank-page paralysis. It can give you something to demolish and so get your thought processes going. It can provide ideas for blogs (bad ones, in my experience). It can do a lot of stuff. Some things it will do better than others.

I just asked ChatGPT 3.5 what I should include in an annual report for my business (not that I’m required to write one), and it came up with a good list of twenty items. I’d want to check that list against the legal requirements for a company annual report for my jurisdiction, mind! It looks plausible (that word again) but I can’t trust that it’s right.

The time you save by having it write a first draft, come up with original ideas or check an authoritative source for the requirements of an annual report is promptly lost again. Once generative AI has produced some text, you’re instead spending your time combing over and over what the bot has produced – for language, for tone, for sheer accuracy of facts, for competence of drafting, for fitness for the audience, for whether the intended message is coming across. And as you’ve not done the thinking yourself, do you even know whether the writing is comprehensive?

Oh, and ChatGPT can only write in US English. Too bad, rest of the world. I once asked it to rewrite in UK English some text it had produced. It just shoved the word ‘UK’ into the text a few times. Sigh.

I just ran this experiment again – it still can’t keep to a word count, something I discovered early on. It could translate a passage into French (no idea whether the translation was acceptable!), but when asked to then rewrite using UK English it carried on with its US spellings – though at least it didn’t just add ‘UK’ all over the place this time!

No matter how souped-up the predictive text is, it doesn’t have originality, creativity, common sense, ingenuity, inquisitiveness, brainstorming, Spidey-sense. There’s no spark. And unless you are a wonderful prompt-writer, it has no ability to understand the breadth and depth of what you want to produce.

If you remember that generative AI is a tool – and (at present) a singularly unreliable tool – you may find uses for it. But you must never abdicate your responsibility to verify generative AI’s output. Not unless you want to find yourself the subject of a news story.


AI has no independent understanding of what you’re trying to achieve. It can’t apply reason, understand nuance, or account for context. The clearer and tighter your prompts are, the better result you will get, but it will still be generic AI text because that is all it has in its own toolkit, and everything it puts out will still need careful checking and endless modification.

Where does generative AI leave editors?

Depressed, pretty much. The people who don’t much worry about whether the writing is good will take what AI produces and run with it until they learn their lesson.

Those who are really good at writing prompts will get more out of it, and may therefore feel less bothered about having a professional eye on their text.

Editors have rarely wished more forcefully for a crystal ball – the degree of uncertainty about what generative AI means for us as a profession is painful.

Why will editors and proofreaders still be needed?

Writers and publishers who do care about a quality product will realise, I fervently hope, that they still need a human being at the editing helm:

  • a human being who is sensitive to nuance and context
  • a human being who knows when the conventions of language must be followed and where they can be bent
  • a human being who can anticipate the reader’s wish to have the information come in a more useful order, or can see what literary sparkle the author has created and honour that, even if that means breaking from a style guide
  • a human being who can hear the author’s voice and remain true to it whilst making edits.

This human being is perfectly happy using tools – I have several available to me. But none of them are the tail wagging the dog.


I’m going to end on a funny story. Years ago – in the last millennium – I worked with a guy called Barrie. Not Barry, but Barrie. One of my staff had frequent occasion to email him, and that staff member was one of several people I knew who thought, bizarrely, that the very best thing to do was to take Word’s first suggestion when the dreaded red squiggly line surfaced. Because Word’s a computer thing, right? And so it’s always right, right?

Well, Word’s dictionary needs feeding. If you have an unusual spelling of a name, you have to tell Word to add it to its dictionary, or it will continue to draw the dreaded red squiggly line under a perfectly fine word.

I can’t tell you how many times Barrie would storm into my office, brandishing the printed-out email as evidence, shrieking, ‘He’s done it again!!!’ Because the first replacement Word offered for ‘Barrie’ wasn’t ‘Barry’. It was ‘barmy’. And after a while, mistakes like that are no longer funny – they become intolerable.

Good writing matters.
