Might sound conspiratorial, but I had a thought in my head the other day: I wonder if a lot of what is on Twitter and other big media platforms is just thousands or millions of bots talking about these talking points and shit, keeping the same message. We know there are insane amounts of bots on there - what's to say they aren't all programmed to keep the conversation going in a particular direction? @cmplxty
I have had the same thought for a long time as well, and I am "pretty sure" that this is happening with increasing reach, across an increasing number of platforms. For example, I play Words with Friends and, while I only play with one person I actually know (we have been playing together for a decade or so now), I also play "quick play" competitions with strangers relatively often. However, over the last couple of years, I have noticed a few "glitches", where I recognize the profile pic of a player but the name has changed, as well as times where the name has been replaced by a Zynga (the developer) profile number.
Are they glitches?
Being a little bit of a conspiracy thinker of a kind, I suspect that what is actually happening is that the developers are adding bots into play in order to boost the player transaction numbers that are used to inform marketers of the worth of advertising on the platform. After playing for so long, I feel like I even recognize the patterns of the bots, as they seem to be programmed to take longer to play a word, waiting until the last couple of seconds before time runs out. Why would they do this? Well, Time on Site (ToS) is one of the metrics that advertisers use to predict reachability and impact, so it makes sense to boost the numbers.
These are four of my friends:




None of them exist.
A few years ago, an Nvidia model called StyleGAN (GAN = generative adversarial network) was used to create an awesome page that most must know by now, https://thispersondoesnotexist.com/ - where an AI-generated person is created on every refresh.
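For anyone curious about the mechanics behind that page: a GAN pits two models against each other - a generator that fabricates samples and a discriminator that tries to tell real from fake. This is only a toy sketch of that adversarial objective in Python (NumPy, one-dimensional made-up data, nothing like StyleGAN's actual architecture), just to show the shape of the tug-of-war:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "real" data: samples clustered around 4.0.
real = rng.normal(loc=4.0, scale=0.5, size=64)

# Toy "generator": just a learnable shift applied to noise.
# (Real GANs use deep networks; this only illustrates the objective.)
shift = 0.0
fake = rng.normal(size=64) + shift

# Toy "discriminator": logistic regression on the raw value,
# outputting the probability that a sample is "real".
w, b = 1.0, -2.0
def discriminate(x):
    return sigmoid(w * x + b)

# Discriminator objective: maximize log D(real) + log(1 - D(fake)),
# i.e. minimize the negative (binary cross-entropy).
d_loss = -np.mean(np.log(discriminate(real)) + np.log(1.0 - discriminate(fake)))

# Generator objective: fool the discriminator into calling fakes real.
g_loss = -np.mean(np.log(discriminate(fake)))

print(f"discriminator loss: {d_loss:.3f}")
print(f"generator loss:     {g_loss:.3f}")
```

In training, both sides take alternating gradient steps against these losses until the fakes become statistically indistinguishable from the real data - which is exactly why the faces above look so convincing.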
These cats don't exist either:


Now, combine this with what @cmplxty was mentioning, and a relatively realistic character profile can be made that an AI can then direct to influence potential readers, or players as the case may be. On top of this, a lot of the news these days is also AI-generated, and it too is leveraging social media platforms to flesh out its content and support different views.
Using groups of AI-controlled profiles, pictures and supporting AI-generated content, it is possible not only to inject support transactions like likes and follows into the system, but also to build entire catalogues of material over time that lend the AI character credibility, increasing its influence over the audience and fooling many into interacting with it - giving the account more credence and generating more transactions.
With so much of the public discourse being powered by social media, it is highly probable that AI bots are generating a huge amount of the online content and steering the direction of the conversation. And is it any wonder that so much of the content is largely the same?
Can you really call yourself a blogger if you aren't writing the content? It is like saying,
"I am great in bed, because I pay a gigolo to do the job."
If "normal people" are using these kinds of AI content creators, just imagine when there are trillions of dollars on the line through advertising revenue on these mass-media platforms - do you think they are not using them, do you think their "integrity" wouldn't allow it?
I suspect that they are very, very common and that, for the most part, the average person will not be able to pick real from fake. Even if they could given enough time, the sheer volume and speed of content delivery means they will not be able to "fact check" each piece, so they will use their intuition to decide what is real or not. And since intuition is trained by experience, it is also going to be influenced by the conditioning of practice: people are accustomed to seeing this content and assuming it is real, so when familiar content is presented, the heuristic is to trust it.
There really is nothing to stop these kinds of accounts from "co-existing" (irony) with real accounts and becoming so ubiquitous, that it is impossible to tell one from another if using purely human faculties. Not only this, the policing of bots is done by the platforms themselves, which means they are able to identify external operators and use them to train their internal AIs to become more natural too.
There has been growing concern over the usage of deep fakes to essentially superimpose faces and voices over existing content to change the narrative and the better it gets, the less we can trust what we see and hear. They have been bringing actors back from the dead for a while now, but the software is advancing at such a speed, that it can be applied to anyone, within moments. Can we trust the reports on a war? That eye-witness account of a robbery? The person shilling medical advice?
Who is held accountable for incorrect information, when the source doesn't exist?
Like it or not, you have to admit we have created a very strange world for ourselves, and one that is becoming less and less trustworthy. Seeing isn't believing anymore, and the more our experiences and interactions become non-physical, the less informative our experience becomes. We are constantly bombarded by messaging across all kinds of topics that influences what we believe and, therefore, the decisions we make - but we have no way of knowing how much of the truth ever existed at all.
We are now finding out what happens when Non-Player Characters start to play.
What is non-existence if what doesn't exist, affects the physical world?
Taraz
[ Gen1: Hive ]