Caught in the Web

Talking with a colleague yesterday, they mentioned that one of the problems they see with treating the transparency of blockchains as a benefit is that it is an error in thinking that more information helps us make better decisions. Their evidence is that we now have more information at our fingertips than we have ever had before, yet we aren't actually making better decisions.

While this is true, there are three problems with the information we currently have. The first is that most of the information that would facilitate our decision making isn't actually available to us, as it is collected, collated, stored and sold by a handful of corporations. The second is that the information isn't in a usable form, because the sheer volume of it makes it impossible to practically apply to our lives for the most part. And the third problem is obvious: much of the information available isn't actually trustworthy.

However, as I was explaining to my friend (the colleagues I discuss these things with are often also friends), these three things are related, because if we were able to trust the information, there would be a lot less of it we would need to consider, but in order to trust it, we would need to have access to it.

Essentially, what we would need to build is a web of trust system, where each bit of information can be verified and traced. As perfect information doesn't exist, there will be noise and errors in the data collected, so after verifying enough points of data, a confidence level could be applied to each piece of information. A simple filtering system could then, for example, filter out all information that has less than a 90% confidence score.
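
As a rough, purely illustrative sketch of what that could look like, with made-up data, hypothetical names and a deliberately naive scoring formula (a real web of trust would weight who did the verifying, not just count the checks):

```python
# Purely illustrative: a naive confidence score and a 90% trust filter.
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    confirmations: int   # independent checks that agreed with the claim
    contradictions: int  # independent checks that disagreed

def confidence(claim: Claim) -> float:
    """Fraction of checks that agreed; 0.0 if nothing has verified it yet."""
    total = claim.confirmations + claim.contradictions
    return claim.confirmations / total if total else 0.0

def trusted(claims: list[Claim], threshold: float = 0.9) -> list[Claim]:
    """Keep only claims at or above the confidence threshold."""
    return [c for c in claims if confidence(c) >= threshold]

# Made-up example data
claims = [
    Claim("Shipment left the factory on the stated date", 19, 1),  # 95% -> kept
    Claim("The product is carbon neutral", 3, 2),                  # 60% -> filtered out
]
print([c.text for c in trusted(claims)])
```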

However, even if we were to omit "untrusted" information, there would still be far too much to actually sort through and apply. But it would also be possible for an AI to group similar information and create "rules" that take it all into consideration, while still leaving room for error.
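
Again as a purely hypothetical sketch, the grouping step could be as simple as bucketing trusted observations by topic and turning each bucket into a rough rule that states its own margin for error:

```python
# Purely illustrative: turn grouped, trusted observations into rough "rules"
# that keep an explicit margin for error, rather than hard laws.
from collections import defaultdict

# (topic, outcome) pairs, imagined as having already passed the trust filter above
observations = [
    ("slept_8_hours", "felt_better"),
    ("slept_8_hours", "felt_better"),
    ("slept_8_hours", "felt_worse"),
    ("skipped_breakfast", "felt_worse"),
    ("skipped_breakfast", "felt_worse"),
]

groups = defaultdict(list)
for topic, outcome in observations:
    groups[topic].append(outcome)

for topic, outcomes in groups.items():
    # The most common outcome becomes the rule; anything else is the stated room for error.
    rule = max(set(outcomes), key=outcomes.count)
    support = outcomes.count(rule) / len(outcomes)
    print(f"{topic}: expect '{rule}' about {support:.0%} of the time "
          f"({1 - support:.0%} room for error)")
```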

We do this already as humans, using our own diagnostic capabilities, paired with our experience and social learning, to apply stereotypes to our world, creating rough rules for how to live our lives. However, we exclude far more relevant information than we include when we do this, because we just can't experience enough, learn enough, or hold enough information in consideration at any one time, and what we do hold is affected by circumstance.

Our heuristics help us navigate the world by saving us a mass of mental energy, but they are always imperfect. And, while an AI system is not going to be perfect, if it is able to sort through and make adequate sense of a very large volume of data, it will be better at it than us.

For example, we were talking about ChatGPT, the AI bot that can answer questions. And while the news carries stories of it making errors, the reality is that based on the information it has available and its ability to answer questions on any topic, it is "smarter" than every individual human on earth, even if collectively we are smarter than it, or at least some subset of us is.

For instance, if it makes an error in calculating some mathematical equation, there are likely only a handful of people who would be able to identify the error it made, whilst the other 99.99 percent of us are none the wiser. And it is able to do this on pretty much any topic that we are specialized in, because it utilizes the information we are feeding it, information from experts. It doesn't have to be a mathematician; it just needs to be able to apply the equations already in existence. This means that, in practical terms, it is more intelligent than any individual on earth.

However, that doesn't mean it has to dominate us all. Instead, it could be used to help us create useful heuristics: collecting data from the best, passing it through the gauntlet of related and influencing factors described by other experts in those fields, and then through further filters generated from real-world experience, to predict what is most likely to work.

Would you trust the suggestion?

This is an interesting question, because we trust experts every day to give us information, even though they are far less knowledgeable and far less rigorous in their processes of testing and comparison. And given a suggestion from a source we trust, we are likely to follow the recommendation.

We can see this now of course at a far simpler level, because streaming services make recommendations for us "based on our previous viewing" and we trust them, never actually considering whether what is suggested is based on what we viewed at all. And while that might loosely affect it, what they tend to suggest is what "coincidentally" makes them the most money, or elicits the behaviors they are seeking from us, or from a group of us. Just like Facebook was able to target groups of voters and influence their decisions.

Yet, without transparency on the information, we can't verify whether what is being suggested is valid, or loaded by a profit maximization algorithm. It isn't "provably fair", which should immediately make us consider it untrusted information and invalidate it, because we know that the corporations pushing the suggestions are definitely working to maximize profit - because they are businesses.

This is why decentralized information is the future, if we demand it. And while at this point it seems scary to lose our privacy, we actually have already lost it; it is just that we can't see what is being taken and have little insight into how it is being used.

But the privacy argument in itself is a hijack, because while it is vitally important (to some extent) for individuals to be able to keep secrets in a healthy society, the system most benefits those who are able to keep the biggest secrets - the governments and corporations who load the dice to benefit themselves, to the detriment of the masses.

They of course don't want us to demand transparency of transactions, because it would inevitably lead to them losing their power over us, which is based on them being able to know about us, but us not knowing about them. There is a massive imbalance in the informational power structure, yet we do not demand change, because we fear being seen on the surface, even though everything is visible to the few watching below.

We think we are hiding, yet with every click, search and scroll, we are generating data and exposing ourselves more and more - making us ever more vulnerable to being controlled.

Something to ponder, whilst waiting for the spider.

Taraz
[ Gen1: Hive ]

Posted Using LeoFinance Beta



22 comments

Privacy is now something that no longer exists; we only have the illusion of having it but in reality everyone knows about us. I think the best thing is to become aware of it rather than deluding ourselves about something that doesn't exist.

As far as information is concerned, the matter is complicated. I believe, as always, that we should aim for the quality of the information rather than the quantity, even if it is hard to say what the quality suggestion is - the one without errors, or in any case the better one. In the end, it always comes down to a question of trust.

Do you trust anyone who sells information? I confess that I don't trust them much; I always prefer to consider alternatives when I find them 😉


we only have the illusion of having it but in reality everyone knows about us.

Yes. Privacy is an illusion. And it is very profitable to make it seem like it isn't.

Quality over quantity for sure is needed, but at the moment, the informational waters are so muddied, almost nothing is usable. It will take time to build processes to capture information well, even after the technology to do it exists, because we will have to change our behaviors in society.

Do you trust anyone who sells information?

I am in crypto. I trust no one.

That is not true. I trust some people even here, but I always know that it is my opinionated trust in them, not that they are trustworthy. It is my responsibility.


Well, trust is definitely something to be earned.
Also you are right, it always remains a personal responsibility.

There, too, the question of quality over quantity is always good. 😉



it is an error in thinking that more information helps us make better decisions.

I've heard this idea before. I can't remember exactly, but I think I've mostly seen it in psychology circles where they show that most of our choices are no better than a random coin flip because we tend to get overwhelmed by too much info and default to making the decision that "feels" best.

Anyway, your three points in rebuttal to that argument are excellent as well as your suggestions that maybe we can utilize AI to help sort through those problems.

And while at this point it seems scary to lose our privacy, we actually have already lost it

We think we are hiding, yet with every click, search and scroll, we are generating data and exposing ourselves more and more - making us ever more vulnerable to being controlled.

I remember reading a story in the paper about how Target identified a young woman as pregnant based on what she had been buying, and they started giving her coupons for baby supplies, all of this long before she was brave enough to tell anyone around her about her condition. And that's just the data from shopping. Add in all the metadata these corporations have on us from our online actions.... Yeah, privacy is dead.


Yep, it is the people who study psychology that say more information doesn't help, but that isn't the case with AI, is it? It gets better and better. As long as it doesn't become sentient, we might survive ;D

That Target story is interesting, because that actually happened when I was at university - in the 90s... 25 years later and the internet, what can they do now? It is scary really.


That Target story is interesting, because that actually happened when I was at university - in the 90s... 25 years later and the internet, what can they do now? It is scary really.

Was it 25 years ago?? I thought it was more recent... haha man.... time gets away from us. Anyway, yeah, I'm sure all of us would be completely horrified by how much they know about us now. Even those of us who are careful, they still know a ton about us.


I always like using the curtains and envelopes and underwear drawers when talking to people about expectations and realities and delusions of privacy 🤣

I always think people should be able to share what they want to share, while governments and corporations should have completely public transactions, and the only things kept private/hidden should be whatever actual personal data is the absolute bare minimum to run their services, plus anything extra if volunteered.


There is also the potential to have some information hidden from some parts of the system. For example, medical information can be packaged so that it can be seen by me and select healthcare people, but not by the average Joe wondering why I am such an ass :)


LoL XD that should be how things work generally XD


Maybe it's the first time I have thought about transparency in this way. If monetary privacy doesn't exist in real life - let's say we all use a decentralized network and we can identify each other on the network, just like Hive - that would make corruption and other derivatives obsolete, and therefore privacy wouldn't be needed, since we would be living in a better world? My only concern is that those who do well or are wealthy could become a target. I still don't know if a 100% monetarily transparent network is best for the world; it might work best in governments. Not sure about this.


and other derivatives

The derivative market is largely unregulated and non-transparent - it is also worth 3x the value of *everything else on earth*....


ChatGPT is a disaster. It is beyond biased.

Data is data; how it is interpreted is not pure science, it is an art.

If all that's left of the internet is ChatGPT to provide all the information and misinformation, it would be a disaster. It lies too!


It is a disaster today, but it will learn faster than you and I ever could or will.


I like this point. I had never thought that sometimes "less is more", even with information.
Blockchain giving traceability and transparency is a chance for people to get to know more about the product/service they are buying. If they are interested, they are going to look at it. Otherwise, they can wait for more curious people to yell "scam" if the blockchain shows something that doesn't line up.

So, beyond flooding people with info, it's also an external application of game theory.


If they are interested, they are going to look at it. Otherwise, they can wait for more curious people to yell "scam" if the blockchain shows something that doesn't line up.

Yes. So, those who will be scanning will generally be those interested in those specific areas. With a web of trust, this can also give a confidence rating to evaluate the evaluations.


It's tough, but technology is improving so rapidly that I think many of us will lose jobs. The more things move digitally, the more AI and technology can be applied. After seeing how things go on YouTube, I have noticed that I need to view everything with some skepticism, because we don't know whether or not someone is an expert. There are a lot of ways to fake things.

Posted Using LeoFinance Beta


Everything is codifiable and eventually, everything will be codified. This could leave us in a world that allows us to do as we please and live our best lives. Or, in the hands of governments and corporations, it could leave us slaves, battling for scraps.


I think they should have to pay us for every click; the bread is in the stomach of the lion today.


At the moment, we are setting tasks for AI. But what will happen if several neural networks begin to communicate and AI undergoes some kind of evolution? What could that lead to? Perhaps the decline of the old civilization.
