A future with machines
Should we be concerned about a future alongside AI? Does the future hold productivity gains or mass unemployment? We are only just starting to see how this new technology will affect our particular line of work.
As an academic editor, I find it encouraging to see how many of my colleagues have identified AI’s limitations. Large international scientific editing services have shown how even basic AI tools can get it wrong. Still, it is clear that many writers now use AI tools to edit their work and are inevitably becoming less reliant on human editors. Some of us have seen workloads diminish or clients become less willing to pay the usual rates.
Will editors ever be completely replaced?
When discussing this, I am often reminded of the self-driving car: a technology that, in theory, has been available for many years. Spectacular advances were made over short time frames, just as we have seen with AI, and self-driving cars were predicted to be in widespread use by 2018. In reality, the technology rapidly got 95% of the way there, then stalled for quite a while before self-driving cars became a reality.
The reason may well be that a driverless car needs to work effectively 100% of the time; the consequences when it fails could be disastrous. An AI editing tool does not need to be as perfect: if it gets 95% of the way towards flawless text, that is still an improvement. Will future clients need only a final polish of AI-enhanced documents? That suggests smaller workloads and lower rates.
A recent large survey of science writers in Asia produced interesting results. Over 90% were aware of AI tools, and of these, 50% regularly used them for tasks such as grammar correction, proofreading, rewriting and drafting abstracts. However, there were still concerns over the need for fact-checking and a general lack of trust in the technology. We are moving towards a future where these tools are used ever more widely by writers. We are not there yet, but as the technology advances further, those writers may no longer value the human touch at all.
Working for AI
Adverts are now appearing for editors and writers to work in AI training. Silicon Valley needs your help to train the next generation of AI. Specifically, they need people with a good command of English to test AI prompts, edit responses and write new content. I have seen adverts advising me not to wait for AI to take my job but to participate actively. The intention is to use my expertise to train the machines. The work can all be done remotely. It can be a side hustle.
The latest such company appears to be Outlier.ai. They are advertising heavily and their reviews are posted widely: most of these are not very good, and include complaints of late or missing payments for completed work. Despite this, out of curiosity, I approached them.
Within 15 minutes of sending them my CV, I was accepted into the programme. Possibly a large red flag? The next hour was spent setting up a profile, confirming my identity and reading their training modules. I did get access to a dashboard of comments from their community. After logging out for the evening, I was sent an email link to an English reasoning assessment that I had to pass before being considered for jobs.
The following morning I logged on to begin the test. Initially, I was given another test to confirm my general knowledge of e-security. Then I was asked to confirm my cell number for SMS updates of new jobs. Following this I clicked on the test link. After a few seconds, a pop-up informed me that I was ‘ineligible for assignments’ and ‘we are not accepting appeals’.
So is it a scam?
On reflection, the swift acceptance, the series of automated tasks and the sudden disqualification without explanation all raise serious doubts about the legitimacy and integrity of the operation. Is it just a sting to harvest personal data? I am not alone in believing so. To me, the complete lack of human interaction throughout the process, coupled with the abrupt and impersonal outcome, strongly suggests a scam. At the very least, it is a misleading and time-wasting endeavour. My first and only interaction with an AI tech company has left me more determined than ever to engage in meaningful collaborations with real people, where trust, respect and authenticity prevail.