For only around SEK1 500 a year it is possible to subscribe to the artificial intelligence (AI) editing services that some reputable journal publishers offer nowadays. This seems like a brilliant way to save time and money on language editing services. But just how effective are these tools? Can we just log in, upload our article, get it back edited in a few minutes and submit it to the journal?
AI editing tools: Are they effective?
The short answer is ‘No’. I recently tested a widely used and well-regarded AI editing tool provided by one of the most prestigious academic journal publishers. I selected a range of academic texts that had previously been sent to me for editing and uploaded them to the AI tool for editing. Here is what I found.
Initial impressions
The AI tool was useful for proofreading academic texts – whether written by first-language English speakers or drafts by second-language English speakers that had already been language edited.
Where texts were more challenging and required substantive editing, the tool was helpful in correcting some, but not all, obvious grammatical and syntactical issues. Even then, it was inconsistent, sometimes returning texts that still contained basic language errors. The tool was particularly weak when the text was by authors writing in English as a second or third language. Generally, the lower the standard of English in the text, the less capable the AI was of doing a good job, and in no case did it shorten the time it would have taken me to edit the text without it.
Additional shortcomings
Below, I briefly list some of the shortcomings I encountered with the AI language editing tool, and comment on these.
Clarity
I was surprised that the tool was not more consistent at recognising and correcting misplaced punctuation, incorrect spelling, ambiguity in constructions, subject–verb agreement and parallel structure errors. The tool routinely failed to identify and amend repetition and also ‘misunderstood’ context – rewriting sentences to give them a completely different meaning.
An example of ‘misunderstanding’ is the following. The author wrote:
‘Even if it is voluntary nearly 65% of all children (ages 1–4) attend preschool and from the age of six years, 90% attend primary school.’
Because the author did not add a comma after ‘voluntary’, the AI ‘misunderstood’ and corrected the text to: ‘Even if it is voluntary for nearly 65% of all children (ages 1–4) to attend preschool and from the age of six years, 90% attend primary school.’
Not only was the author’s meaning ‘misunderstood’, but the resulting sentence did not make sense. Correctly interpreted, the sentence should read: ‘Although it is voluntary, nearly 65% of all children (ages 1–4) attend preschool, and from the age of six, 90% attend primary school.’
Context matters
Sometimes, authors submit rough drafts for me to ‘clean up’ because they know I can do it faster than they can. In such texts, careless errors that lead to confusing constructions are common. As the language editor, I can usually quickly ‘knock’ the offending sentences into shape because I can make sense of the context; I can often work out what the author means because I have read and understood the text. The AI tool was consistently unable to identify and correct such errors. Perhaps I should not be surprised, given that AI operates by using language pattern frequency, and cannot engage in sense-making.
Replicating nonsense
In a similar vein, the tool also replicated nonsense. The following sentence passed through the AI edit: ‘This paper addresses how assimilation I conceptualised in the Canadian policy of the integration of refugees, how the conceptualisation forms the practice of integration in Canada and the consequence of the practice for the individuals who are subjected to these practices in their successful inclusion in their occupations acquired in their country of origin.’ Would you be happy with this?
Relevance
One client had been instructed by a reviewer to avoid the terms train and training in certain contexts in their article. However, I could not simply find and replace the term with another: in some instances, training was the appropriate term, but elsewhere it was not. Each instance had to be considered in its context, which required the language editor’s careful judgement, and revisions made accordingly. The AI tool was not able to assess the contexts and appropriately replace train/training with, for example, rehearse, rehearse for, practise, etc.
Interpretation
In another case, early in their paper, an author identified and named a theme that emerged in the data analysis. They continued to refer to this theme/term in the results and discussion, but the AI tool was unable to ‘remember’ how the term should be written as a theme, and consistently, and incorrectly, edited its spelling and use in the rest of the paper.
Editors are still needed
To summarise, once the AI tool has edited a text, I still need to check the edited article carefully against the original to ensure correct punctuation, use of italics and subject–verb agreement in specific contexts, and to confirm that the author’s intended meaning has been conveyed. Even with the best texts I receive, I always have at least one query, sometimes many, about the authors’ intended meaning. I share these queries with the authors, and there is dialogue until I am clear about the intended meaning and we have agreed on the most appropriate way to convey it.
If you are confident that your written academic English matches or surpasses your spoken English, then, of course, try one of the AI editing tools on offer. However, if you have found that language editors generally make substantial changes to your writing for publication, and return your texts with queries about intended meaning, then I suggest, for now anyway, you consider better ways to spend your SEK1 500.