AI writing detectors or AI writing dictators?
Humanity is entering an age of technology unlike any seen before: it would appear that the nuclear age has been followed by the age of artificial intelligence. AI voice models can cover any song in your favourite artist’s voice, and computer graphics can render anyone you know saying or doing whatever you might be pleased to command. Provided, of course, you feed them enough source data. For now at least, the technological children that humanity has birthed remain a shadow of the original. We are yet to face the conundrum God presumably did when He first created man in His own image. For now, AI is not so much Adam as it is Frankenstein’s monster. And as anyone who has read the book will know, that is a very difficult character to overestimate.
The first AI innovation to capture our attention, no less than a year ago, was the early language model versions made available to the public on the world wide web. With them came a flurry of concerns about the integrity of written content, be it academic, professional, or even personal. As AI language models get better and better at emulating human expression, many writers are questioning their job security, while others question their very identity. Sitting right in the mix of things, adjudicating the face-off between human ingenuity and artificial intelligence, is none other than – AI detection tools. It would appear that the prophesied battle between robots and mankind has ended before it even started.
There are plenty of product reviews of detectors for AI-generated text, for anyone who cares to experiment. But how do they work? How can a piece of (albeit extremely complicated) software decide whether a sentence was written by a thinking, feeling human or by a learning, growing language model?
To understand this, we must first understand what an AI language modelling tool is and how it works. There can be some confusion on this point: most people see it as a search engine with extra frills. However, as of now, the publicly available (free) AI language modelling tools we have access to do not have access to the internet to look up the answers to our questions. They draw on a limited amount of data they have been fed in order to generate an answer. This ‘answer’ is developed linguistically rather than factually. The AI does not ‘think up’ an answer for you; it merely uses the language of your query to extrapolate what ‘language’ you want as an answer. This means that the answer you get might not even be based on accurate, or at least up-to-date, data.
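To make that concrete, here is a deliberately tiny sketch of the basic mechanism. The three-sentence ‘training corpus’ and the word-counting model are my own invention, not any real tool’s inner workings, but the principle is the same: the tool looks at which words tend to follow which in the data it was fed, and continues your prompt with the likeliest sequence, whether or not that sequence happens to be true.

```python
from collections import Counter, defaultdict

# A toy 'training corpus' standing in for the fixed data the tool was fed.
corpus = (
    "the moon is made of rock "
    "the moon is made of cheese "
    "the moon is made of cheese"
).split()

# Count which word tends to follow which (a minimal bigram language model).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def continue_text(prompt, length=3):
    """Extend the prompt with the statistically likeliest next words."""
    words = prompt.lower().split()
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        # Pick the most frequent continuation, regardless of whether it is true.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("The moon is"))
# -> "the moon is made of cheese": linguistically plausible, factually wrong.
```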
How do they know?
How would a detection tool then tell the difference between AI-generated text and a text with someone’s blood, sweat, and tears behind it? The short and easy answer is that it simply can’t. The best an ‘AI detecting tool’ can do is use a host of factors to estimate the probability, or likelihood, that something is AI generated. This can, for instance, involve ‘detecting’ whether a text was typed, in which case its words and phrases would carry distinct, sequential time stamps, or pasted in wholesale, which would be a good indication that it had been generated elsewhere. But of course, that is all this method can do – determine whether the text was generated elsewhere before arriving at its final destination, not who, or what, had been doing the generating.
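As a rough illustration of that heuristic – and only as a sketch, since real tools do not publish their methods, and the event log and thresholds below are invented – imagine an editor that records when each word arrived. Words that trickle in one by one look typed; a long run of words sharing the same instant looks pasted.

```python
# Hypothetical keystroke log: (word, seconds since the document was opened).
# A human typist produces spread-out timestamps; pasting dumps many words in
# at (almost) the same instant.
events = [
    ("The", 1.2), ("quick", 1.9), ("brown", 2.6), ("fox", 3.3),
    ("jumps", 9.0), ("over", 9.0), ("the", 9.0), ("lazy", 9.0), ("dog", 9.1),
]

def pasted_runs(events, min_words=4, max_gap=0.2):
    """Flag runs of at least `min_words` words that arrived nearly at once."""
    runs, current = [], [events[0]]
    for prev, curr in zip(events, events[1:]):
        if curr[1] - prev[1] <= max_gap:
            current.append(curr)
        else:
            if len(current) >= min_words:
                runs.append([word for word, _ in current])
            current = [curr]
    if len(current) >= min_words:
        runs.append([word for word, _ in current])
    return runs

print(pasted_runs(events))
# -> [['jumps', 'over', 'the', 'lazy', 'dog']]: a block that appeared all at once.
```

Which, again, only tells us the block came from somewhere else – not whether that somewhere else was a machine.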
Another tactic AI detection tools use is to check whether they are looking at a stolen or plagiarised text. Since language modelling tools draw on pre-existing, written texts to formulate their responses, their output frequently tests positive for plagiarism. Much like a parrot with a good memory, AI can only ‘speak’ in combinations of words and phrases that already exist. This can be combined with the method described above to see whether a large chunk of text had been copy-pasted onto a page before being tweaked, which would indicate that a text had been manipulated to hide plagiarism. Improved versions of the tools, or a dedicated human, can of course easily sidestep this issue by paraphrasing the plagiarised text. Overall, this strategy is not a particularly effective method of filtering AI-generated content either.
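The plagiarism side of the check usually boils down to counting shared word sequences. The snippet below is a bare-bones sketch of that idea; the five-word window and the two example sentences are mine, not any particular tool’s.

```python
def ngrams(text, n=5):
    """All n-word sequences in a text – a common unit for overlap checks."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(suspect, reference, n=5):
    """Fraction of the suspect text's n-grams that also appear in the reference."""
    suspect_grams = ngrams(suspect, n)
    if not suspect_grams:
        return 0.0
    return len(suspect_grams & ngrams(reference, n)) / len(suspect_grams)

reference = "language models draw on pre-existing written texts to formulate their responses"
suspect = "these tools draw on pre-existing written texts to formulate an answer"

print(round(overlap_score(suspect, reference), 2))
# -> 0.43: a large shared run of word sequences despite the light rewording.
```

A determined paraphraser, human or machine, drives that score straight back down, which is exactly why the method is so easy to defeat.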
Most AI detectors use a combination of such tactics, together with textual analysis, to determine whether the text under consideration is AI generated. The textual analysis part entails determining whether a text is ‘just’ a sequence of predictably occurring words. This is the metric by which the detector judges whether it’s looking at something written by a human or by an AI. There’s of course just one glaring problem – most human communication consists of predictable sequences of words and phrases. Even a particularly creative wordsmith does not write in wildly unpredictable sequences of words, as this would make the result incomprehensible to the average reader. James Joyce of course springs to mind, mostly in relation to what a difficult read his works are.
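A toy version of that ‘predictability’ score might look like the sketch below. It asks how strongly a small reference corpus (standing in for the detector’s own language model, which in reality would be vastly larger) expects each next word, and averages the result. The corpus, the test sentences, and the scoring rule are all invented for illustration; real detectors use far more sophisticated probability models, but the principle – predictable word order scores as ‘machine-like’ – is the same.

```python
from collections import Counter, defaultdict

# A tiny reference corpus standing in for the detector's own language model.
reference = (
    "the cat sat on the mat . the dog sat on the mat . "
    "the cat sat on the rug ."
).split()

# Estimate, for each word, how often each possible next word follows it.
follow_counts = defaultdict(Counter)
for prev, curr in zip(reference, reference[1:]):
    follow_counts[prev][curr] += 1

def predictability(text):
    """Average probability the reference model assigns to each next word.

    A high score means the text follows the expected word order closely,
    which is exactly the property a detector would treat as 'machine-like'.
    """
    words = text.lower().split()
    probs = []
    for prev, curr in zip(words, words[1:]):
        counts = follow_counts.get(prev)
        if counts:
            probs.append(counts[curr] / sum(counts.values()))
        else:
            probs.append(0.0)  # word never seen before: maximally 'surprising'
    return sum(probs) / len(probs) if probs else 0.0

print(round(predictability("the cat sat on the mat ."), 2))        # -> 0.78
print(round(predictability("the mat sat on a purring cat ."), 2))  # -> 0.19
```

Notice that the perfectly ordinary human sentence scores as highly ‘predictable’ too – which is the glaring problem described above.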
Whether differentiating between AI-generated content and human endeavour is even an important exercise is a question under intense debate in writing and publishing communities. What proponents of either opinion can perhaps agree on is that neither the publisher nor the reader deserves to be deceived as to the nature of what they are reading. What I personally want to see happen is not something I’ve decided on yet either. On the one hand, it is a relief to think that the more monotonous, repetitive parts of writing could be handed over to a machine. On the other hand, it is on these monotonous, repetitive pieces of writing that newcomers to the trade earn their chops, and where truly talented artists stand out. It is a fallacy to think that delegating them to a machine does not rob writers of valuable opportunities.
Just earlier this week, I had the dubious honour of being told that my writing had been flagged by an AI detection tool. Just one problem – I wrote it myself. And no, this entire piece is not an attempt to defend myself.
The incident provoked some serious thinking about what I had learned about language and how it is used. Isn’t all human communication made up of predictably occurring sequences of words? Does this mean that language entraps expression instead of liberating it? And what does my humanity amount to when I am forced to sit down and rewrite a text I had put together with great care, just to sound acceptably ‘unpredictable’ to a language-model detector? Isn’t the machine finally dictating to the man? What does it mean that this first confrontation between man and machine is taking place on the linguistic frontier? Will the next generation of students have to sit down and comb through thesauri to find unusual words for their essays and book reports? And aren’t I glad that I graduated before all this happened?
My understanding of language, despite several heavy-duty degree modules on the subject, is primarily shaped by one famous philosopher and one famous fantasy author. Ludwig Wittgenstein, an Austrian philosopher, argued that the world we see and experience is defined and given meaning by the words we choose, so that the world is what we choose to make of it. This idea gives prime importance to language, as the world is given meaning through language, rather than the other way around. Terry Pratchett, a British author, piled on to the argument with the quote: “we are trying to unravel the Mighty Infinite using a language which was designed to tell one another where the fresh fruit was”, effectively hurling language’s role in making sense of the world we live in into the metaphorical dustbin. Perhaps an eloquently posed question to one of the paid versions of ChatGPT will give us the correct answer.