TL;DR: You can now type a small amount of text into a computer and have artificial intelligence generate a fairly convincing continuation.
Jeremy Howard warns of "the technology to totally fill Twitter, email, and the web up with reasonable-sounding, context-appropriate prose, which would drown out all other speech and be impossible to filter."
The technology is called GPT-2, which apparently doesn't stand for anything. If I were to assign a name, I would have it mean: General Purpose Transformer, version 2.
Edit: In the first video, he says it means "Generative Pre-trained Transformer," but I couldn't find that stated anywhere else.
What's new about it isn't the architecture, per se. What's new is how much data it has to work with. As an analogy, it's like they took the design of a gasoline-powered engine and made it the size of the moon.
**So the bottom line is, they were/are worried that this technology will be used for evil purposes.** In fact, the first thing I said to Google when I first started to investigate this was, "How are you going to stop this stuff?" The answer was, "Well, we're going to do a bunch of stuff to try to stop this stuff from happening." And then it turns out, it didn't happen. It turned out that, actually, some of the stuff that Google is doing doesn't really make that much of a difference at all. And if you really believe that this is a problem, then you are really worried about the possibility that somebody is going to get to a point where they have such a powerful and effective weapon that they can take over the world.
Can you find the paragraph in this article that was generated by the transformer? It was the previous paragraph. I just gave it the first line (in bold) and it wrote the rest of that paragraph.
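To give a feel for what "give it the first line and it writes the rest" means mechanically: a language model repeatedly predicts a likely next word and appends it. GPT-2 does this with a huge transformer; the loop itself can be sketched with a toy word-level bigram model. Everything below (the corpus, the function names) is my own illustration, not OpenAI's code:

```python
import random
from collections import defaultdict, Counter

def train_bigram(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def continue_text(follows, prompt, n_words=10, seed=0):
    """Extend the prompt by repeatedly sampling a likely next word,
    weighted by how often it followed the previous word in training."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(n_words):
        counter = follows.get(out[-1])
        if not counter:
            break  # dead end: this word never appeared mid-corpus
        choices, weights = zip(*counter.items())
        out.append(rng.choices(choices, weights=weights)[0])
    return " ".join(out)

# Tiny made-up corpus, just to exercise the loop.
corpus = ("the model writes text the model reads text "
          "the model writes more text every day")
model = train_bigram(corpus)
print(continue_text(model, "the model", n_words=5))
```

GPT-2's advantage over this toy is exactly the scale point above: instead of counting word pairs in a sentence, it conditions on long contexts learned from tens of gigabytes of web text, which is why its continuations stay on topic for whole paragraphs.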
As of the 5th of this month, OpenAI has stated:
We’ve seen no strong evidence of misuse so far. While we’ve seen some discussion around GPT-2’s potential to augment high-volume/low-yield operations like spam and phishing, we haven’t seen evidence of writing code, documentation, or instances of misuse. We think synthetic text generators have a higher chance of being misused if their outputs become more reliable and coherent. We acknowledge that we cannot be aware of all threats, and that motivated actors can replicate language models without model release.
So maybe they'll release their full model soon. Keep an eye on the GitHub repository: https://github.com/openai/gpt-2