That's Fairly Convincing

in The Collegial Spirit · last year (edited)

TL;DR: There's a way to type a small amount of text into a computer and have artificial intelligence continue it with more content that is fairly convincing.

Jeremy Howard warned of "the technology to totally fill Twitter, email, and the web up with reasonable-sounding, context-appropriate prose, which would drown out all other speech and be impossible to filter"

Source: Wikipedia

The technology is called GPT-2, which apparently doesn't stand for anything. If I were to assign a name, I would have it mean: General Purpose Transformer, version 2.

Edit: In the first video, he says it means "Generative Pre-trained Transformer." But I couldn't find that fact anywhere.

The thing that's new about it isn't the architecture, per se. The new thing is how much data it has to work with. As an analogy, it's like they took the design of a gasoline-powered engine and scaled it up to the size of the moon.

So the bottom line is, they were/are worried that this technology will be used for evil purposes. In fact, the first thing I said to Google when I first started to investigate this was, "How are you going to stop this stuff?" The answer was, "Well, we're going to do a bunch of stuff to try to stop this stuff from happening." And then it turns out, it didn't happen. It turned out that, actually, some of the stuff that Google is doing doesn't really make that much of a difference at all. And if you really believe that this is a problem, then you are really worried about the possibility that somebody is going to get to a point where they have such a powerful and effective weapon that they can take over the world.

Can you find the paragraph in this article that was generated by the transformer? It was the previous paragraph. I just gave it the first line (in bold) and it wrote the rest of that paragraph.
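To give a sense of how that works under the hood, here is a toy sketch of autoregressive generation: predict the next word from what came before, append it, and repeat. This is only an illustration, not GPT-2's actual mechanism — GPT-2 uses a transformer trained on roughly 40 GB of web text rather than a hand-written lookup table, and it works on sub-word tokens, but the generation loop is the same idea.

```python
import random

# Toy "language model": for each word, a probability distribution
# over possible next words. GPT-2 computes these distributions with
# a transformer instead of a hard-coded table.
MODEL = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 1.0},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt, max_words=5, seed=0):
    """Continue a prompt one word at a time, sampling each next word."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(max_words):
        dist = MODEL.get(words[-1])
        if dist is None:  # no known continuation: stop generating
            break
        next_word = rng.choices(list(dist), weights=list(dist.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))
```

Every continuation the sketch produces is plausible given the word before it, which is exactly why the real thing reads so convincingly: each step is locally sensible, whether or not the whole is true.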

As of the 5th of this month, OpenAI has stated:

We’ve seen no strong evidence of misuse so far. While we’ve seen some discussion around GPT-2’s potential to augment high-volume/low-yield operations like spam and phishing, we haven’t seen evidence of writing code, documentation, or instances of misuse. We think synthetic text generators have a higher chance of being misused if their outputs become more reliable and coherent. We acknowledge that we cannot be aware of all threats, and that motivated actors can replicate language models without model release.

So then maybe they'll release their full model soon. Keep an eye on their GitHub repository:



Already better quality than a good deal of comments...including this comment of mine =/

 last year 

It's uncanny. It matches the style of the prompt so well, if you type in a 6th grader's book report, it'll continue as a 6th grader's book report.

I'm not sure when it will be able to emulate complex authors, but its discourse is already well beyond populist politicians.

Combined with deep fake videos, we may see the rise of automated politicians... that is, a candidate agrees to run but has a team of a dozen individuals running speeches through AI and making videos. Alternatively, nasty old politicians could give themselves a younger and warmer avatar, or several, each pretending to be a team.

It will be like those old dialogue writings of the Renaissance where the writer plays his own devil's advocate... except way more convincing, even in real time.

When journalists make up opinion pieces and surveys now, we usually catch it, but this will make it harder and harder.

 last year 

It would be interesting if an automated candidate made promises that the real candidate had to keep. I remember years ago that they were having trouble with an AI that kept being racist and the programmers couldn't figure out why. It turns out the training data had some "baselines" that needed better filters ... or something.

Guess that's going to make quality curation here even harder. I just tried it out. The first sentence is mine. Interesting that the AI was able to get "Steemit" from "Steem", but it got the details wrong.

The Steem blockchain lets users participate in social media interactions and earn rewards for their activity. is the organization of which the Steem blockchain and the core cryptocurrency Steem are designed to run on. This organization does not run itself, but takes care of much of the administrative work and the community is responsible for any issues the Steemit platform has. This organization also deals with all the marketing and updating of the Steem platform.

The goal of the Steemit community is to share content with other users and earn rewards for their contributions. There are other ways to earn Steem too, such as producing in-game items, selling Facebook likes, and more. While the Steemit platform is not designed to be full-time jobs, it is designed to be a network that focuses on fun and community development. The reason for this is that the Steem blockchain is decentralized and allows the community

 last year 

It can be handy if you have writer's block. Just let it rip but don't copy/paste it. Just seed the random ideas. I think that's a pretty good use for AI in the creative space, in general.

It is always something. Now we have this lovely stuff to look forward to battling.

Not long ago, SteemPlus accused me of being a bot.

Proof of Brain... bye-bye!

That's cool. I agree, this would be a nice way to get ideas.

 last year 

Right? People always look at the worst case scenario with AI. We can't forget that the purpose of technology is to make life easier. That's true for a hammer as well as AI.

Can you imagine when the hammer was first invented, people going nuts talking about the Hammer Singularity when everyone is running around bashing people over the head?

True, if we focus on the bright side of the technology, it'd be possible to explore it to its fullest at a fast pace.

Wish you a lovely day
