I recently read a rather unfavorable book review:
Tom Simonite does not keep it simple. He doesn't give you enough info on a subject to make the reading of the book enjoyable. He has over 400 pages of footnotes, so that is a way of getting your work for a subject out of the way. This book was so depressing to me, I can't even talk about it without feeling like I want to punch the kindle.
There would be nothing special or interesting about this review, except for one fact: it was written by Artificial Intelligence.
Nowadays, AI is able to generate complex and elaborate texts that can convince a human reader they were written by another human. This creates a dangerous potential for generating fake news, fake reviews, and fake social media accounts. It also poses a threat to the economy of the Steem blockchain, which I already mentioned in one of my previous articles.
Live by the sword, die by the sword
So what can we do to prevent such abuse? It may sound ridiculous, but developing another AI is a decent solution.
Researchers at Harvard University and the MIT-IBM Watson AI Lab have developed a new AI tool called Giant Language model Test Room (GLTR), designed to detect patterns characteristic of machine-generated text.
When generating text, an AI language model relies on statistical patterns: at each step it tends to pick one of the most probable next words, without focusing on any particular meaning. If a text consists mostly of highly predictable words, it was most probably written by a machine.
GLTR evaluates each word and highlights it according to its predictability. Green indicates the most predictable words, purple the least predictable ones.
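The idea behind this highlighting can be sketched with a toy model. The real GLTR uses a large language model (GPT-2) and buckets each word by its rank among the model's top predictions; the sketch below substitutes a tiny bigram frequency model and made-up rank thresholds, so the function names, thresholds, and color labels are illustrative assumptions, not GLTR's actual implementation.

```python
from collections import Counter, defaultdict

def train_bigram(corpus_tokens):
    # Count which words follow each context word in the corpus.
    model = defaultdict(Counter)
    for prev, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        model[prev][nxt] += 1
    return model

def rank_of(model, prev, word):
    # Rank of `word` among the model's predictions after `prev` (0 = most likely).
    # Unseen words get the worst possible rank.
    ranking = [w for w, _ in model[prev].most_common()]
    return ranking.index(word) if word in ranking else len(ranking)

def color_words(model, tokens, buckets=(1, 3)):
    # Toy version of GLTR's buckets: very predictable words are "green",
    # moderately predictable ones "yellow", everything else "purple".
    colors = []
    for prev, word in zip(tokens, tokens[1:]):
        r = rank_of(model, prev, word)
        if r < buckets[0]:
            colors.append((word, "green"))
        elif r < buckets[1]:
            colors.append((word, "yellow"))
        else:
            colors.append((word, "purple"))
    return colors

corpus = "the cat sat on the mat the cat ate the fish".split()
model = train_bigram(corpus)
print(color_words(model, ["the", "cat"]))   # "cat" is the most common word after "the"
print(color_words(model, ["the", "dog"]))   # "dog" never follows "the" in the corpus
```

A text whose words are overwhelmingly "green" under a strong language model is suspiciously predictable; human writing tends to mix in many lower-ranked word choices.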
Text written by human:
Text written by AI:
To test the effectiveness of GLTR, Harvard students were asked to determine whether given texts were generated by a machine or a human. Without the tool, they correctly classified only about half of the texts. With the help of GLTR, their accuracy rose to roughly 72%.
Out of curiosity, I scanned my own Steem publications, and this little experiment confirmed my belief that I am a human :)
Share your view
Do you think this is the right way to fight the misuse of Artificial Intelligence? I can't shake the impression that it is a vicious circle leading nowhere. Moreover, I think it is only a temporary solution: in the future, machine-generated text will be so diverse, creative, and non-linear that it will be impossible to spot through statistical analysis. Unfortunately, GLTR and similar algorithms leave little room for further improvement.
Here you can test GLTR for free.