Pear-Shaped Information

Apples don't grow on pear trees.

Seems pretty obvious. But are the flowers in the next picture, taken in my yard, from an apple or a pear tree? Perhaps they are from one of the cherry trees? Maybe they are from another tree altogether.

How can you tell?

[Image: blossoms on a tree in my yard]

The right information.

And I am noticing more and more how many people are essentially playing the same guessing game as above, but with far more complicated and complex information, relying on generative AI to provide information whose quality, due to their own lack of experience, they cannot judge.

A horticulturalist would likely know which tree.

Because they have the experience required, or they can get pretty close, because they also have the experience to identify what tree it is not. But so much of the information coming at us isn't from our area of expertise. And because it lies outside our wheelhouse, we aren't able to determine the quality of the information we have been given; we just have to trust it.

It is from an apple tree that sits outside my office window.

Trusting untrustworthy information isn't a new problem; it has been going on for decades, centuries, and probably millennia. We are social animals, built to pass on what we know by sharing stories, but we have also learned that we can manipulate each other by giving incomplete information or misinformation, or by telling unverifiable stories - the way of religions.

For instance, a more recent example comes from a family friend who rides horses at a stable where a teenage girl fell off and died. We were visiting and my friend was really upset, because with first-hand knowledge of the events and details, she recognized that the newspaper got the story quite wrong in many ways. She found this even more unforgivable because the newspaper's headquarters were less than fifteen kilometers from the stables.

However, after reading us excerpts and pointing out the incorrect facts, she turned the page and started talking about another article, this one about the war in Crimea (this was a decade ago), and how terrible it all was, reading out facts from that article. At no point did she see the irony of her behavior; she just trusted. If the journalists couldn't get the simple story of a girl falling off a horse right, what are the chances of them representing a complex geopolitical conflict well?

But while the story of misinformation isn't new, the problem now is that we have so much information at our fingertips, brought to us through what is effectively a black box (even when links are provided, no one researches the credibility of all information), across complex topics with which we have little to no direct experience. Not only that, we are served this information without even searching for it, as algorithms push it through our feeds, and being human, we tend to believe it is vetted information.

On top of this, depending on our preferences, we are also pushed information in silos designed to evoke an emotion, belief, or action from us. For example, last weekend there was a stabbing in a shopping mall in Australia, and the information that came out claimed the perpetrator was a Muslim, a Jew, a person who was actually innocent, a spy for multiple sides... the list went on, with each version shared by siloed groups wanting to believe one story over another because it suited their purpose, their agenda.

It was a person who suffered from schizophrenia.

Not exactly the face of evil that was being reported. It was a sick person. However, when someone stabs mostly women and a baby, it is easy to attribute motives that may not have been there at all. As terrible as the event is for the families and friends of the victims, what should really be considered is why so many people have mental health problems these days. Rather than looking at the outcomes, go upstream and look at the conditions that lead to them.

Conditions like the quality of information being spread, and the lack of verification.

What we have to consider these days is that the quality of the information we receive from AI programs is only as good as the information fed into them, and for the most part, we don't actually know what that is. Not only that, we don't know what the AI programs are doing with that information, how they verify it, and what kind of weighting is applied when there are conflicts. We have seen quite a few "bias" problems coming through in the results of late.

Of course, AI could be applied well in many ways, for instance where the pool of information it uses as a resource has already been verified. If all the formal documentation of a company were used as the data set, the generative AI results would be quite solid, because the pool is contained and built on the expertise of the many people in the company. A rough sketch of what that could look like is below.
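As a toy illustration (all document names and contents here are hypothetical), a "contained" setup might restrict retrieval to a fixed set of verified documents, so whatever generation happens downstream is bounded by the quality of what was put in:

```python
# A minimal sketch of a contained, verified information pool: retrieval is
# restricted to a fixed set of documents. All names and contents are made up.
import re
from collections import Counter

VERIFIED_DOCS = {
    "security_policy.md": "Passwords must be rotated every 90 days.",
    "onboarding.md": "New employees receive their laptop on day one.",
}

def tokens(text: str) -> Counter:
    """Lowercase word counts, ignoring punctuation."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, top_k: int = 1) -> list[str]:
    """Rank the verified documents by naive word overlap with the query."""
    q = tokens(query)
    ranked = sorted(
        VERIFIED_DOCS,
        key=lambda name: sum((q & tokens(VERIFIED_DOCS[name])).values()),
        reverse=True,
    )
    return ranked[:top_k]

# Only retrieved, verified text would be handed to a generative model as
# context, so the answer quality is bounded by the quality of the pool.
print(retrieve("how often should passwords be rotated?"))  # ['security_policy.md']
```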

We are what we eat.

And AI results also reflect what the AI is fed. Feed it shit, and it will generate shit. Feed it informed experience, and it will turn that into some form of wisdom, able to give accurate insights. However, most of what we are seeing so far in the AI space, at the general level at least, comes largely from unverified sources. And even when the results seem okay, experts who look at them will pick out errors, or at least potential conflicts and differing opinions. For us, though, laypeople on most of the topics we interact with, it all seems plausible.

Just like the picture from the pear tree outside my office window.

Taraz
[ Gen1: Hive ]

Posted Using InLeo Alpha



30 comments

This reads like: the easier things seem to have become, the more complex they are. Not an easy terrain for our children to navigate.


I think everything is rather complex when it comes to humans, but we try to deal with it using algorithms that can never be dynamic enough.


I think some journalists have insufficient information, so they complete the story with made-up or hearsay details, which leads to misinformation.


Remember that they are part of the attention economy also. They have to beat the others - truth doesn't matter as much.


AI, being an aggregator of information, is largely unreliable. Big tech and media have won again by heavily marketing a half-baked product. Unfortunately, the masses have already bought into it.


Masses will always choose convenience.


I actually wrote a post a while ago about an AI chatbot that you feed the data into. You still have to verify that the sources are accurate, but as long as you have done that, you can be pretty confident the information you then mine is quite reliable. It's kind of a cool idea. You could, for example, load in twenty different articles from scholarly journals and, as long as you trust the sources, move forward from there.


This is where it is moving: the information being fed in is contained. But it has a way to go, as most companies, for instance, don't have good information repository hygiene - it is a documentation mess.


At no point did she see the irony of her behavior; she just trusted

Did you get a completely uncomprehending blank stare when you pointed this out? Or did you point it out?

I remember that being my major turning point for "mainstream" news (they reported on a topic I knew a lot about and got it so horribly and violently wrong that literally the most basic of internet searches before chundering that excuse for an article out would have gotten better information than that complete failure). The topic wasn't that hard and very much not niche; if they couldn't get something that mind-numbingly basic right then they quite simply couldn't be trusted on literally anything at all.

But when I pointed this out to other people, they basically just didn't want to know and decided I was just being paranoid and delusional. Literally straight off the back of acknowledging every single error I pointed out in the previous thing.

Conditions like the quality of information being spread, and the lack of verification.

I remember an ex-friend I went to uni with voicing something similar years ago, but it was along the lines of "there's a study that proves any point of view". A number of us doing science and engineering (we were both biology majors) had noted this at least on a subconscious level at the time, but this person was the only one to throw it at Facebook, possibly because they found a study "proving" something they fundamentally disagreed with.

quality of the information we receive from AI programs

I don't know about other information sources that AI gathers from, but I do know that a number of artists glaze their art to screw with the generative art AI things, despite it being "minimal effort" to get around (according to people online for whom any measure that makes things that tiny bit harder is "minimal effort"; I don't know whether that's genuine advice not to bother expending effort on something that won't get the result people are hoping for, or an attempt to stop people from implementing these extra-effort measures). And it's screwing with it enough that the developers of at least one of these things are having a cry about how they should be exempt from copyright, because their technology is far more important than any stupid artist's feelings about their work will ever be.

I am aware of the existence of at least one article written about it, but didn't care enough to note what it was or where it came from at the time, so all that's left is this unverifiable memory XD

I feel like you should have changed the location as well as the type of tree to see if anyone noticed XD


Or did you point it out?

Didn't point it out - she was not in the frame of mind to take the observation :)

if they couldn't get something that mind-numbingly basic right then they quite simply couldn't be trusted on literally anything at all.

Precisely. It is amazing how many people don't actually wake up to this though.

I get that people think that AI generation through prompts is their own creativity, but it really isn't, in my opinion. Any string of words will result in something and, like they say, enough monkeys on typewriters will eventually write the Bible. It isn't enough, in my opinion - there is too much randomness, not enough control over the process, to call it "own work" - unless you are coding the AI itself, of course - that is an art!

I feel like you should have changed the location as well as the type of tree to see if anyone noticed XD

Location was samish ;)
(It is a pear tree)


Finding unbiased news is difficult these days. It feels like everyone has an agenda or is trying to weave a narrative. I guess there is also the desire to be first to report, so they are writing with incomplete research. Unfortunately, we can only rely on others for these things. It's not like we can be in multiple places at once and do the necessary investigation for each. I guess we just need to reward those that give unbiased reporting and stop supporting those that lie.


I guess there is also the desire to be first to report, so they are writing with incomplete research.

This is a big part of it - because being first gets the attention. Accuracy is no longer a competitive advantage; it is a detriment to speed.


"...why so many people have mental health problems these days"

Buddy, I don't know if you've been on other social media lately, but there's a huge trend of people self-diagnosing and having a weird mental health fetish. It's like a point of pride and a new norm. Pretty wild.


At the official level it is too - it is cool to be mentally ill. :D

Perhaps they should shorten all the buses to accommodate the new culture.


I guess technically speaking it is insane that it's happening.


I do think news organizations need to do better with their articles. At least they should update the articles if people tell them that things are wrong. However, that won't stop people who just read the headlines and never go back to the old article. News tends to be time-sensitive, but I would expect them to at least get the facts right.

I think it's interesting to hear that feeding AI shit will return shit. It does make sense, because AI is kind of inaccurate and, from what I see, sometimes it just makes things up.


At least they should update the articles if people tell them that things are wrong.

Front-page headlines get "corrected" at the back of the paper. The corrections should have to be front-page apologies too.

I think it's interesting to hear that feeding AI shit will return shit. It does make sense, because AI is kind of inaccurate and, from what I see, sometimes it just makes things up.

"Intelligence of the masses" :)


Yeah, I agree that there should be front-page apologies, but we all know the news companies don't care.


Journalism standards have declined over the years. I wonder if that's a people trend. If we take this as a benchmark for technologies like AI, then mankind is in trouble: AI is bound to be used as a tool to rip people off rather than for the greater good, which is often neglected when mouth-watering returns are to be made for shareholders.


AI is not for betterment, it is for profit. The way companies are looking to use it is to cut costs, which means cutting people.


What we have to consider these days is that the quality of the information we receive from AI programs is only as good as the information fed into them, and for the most part, we don't actually know what that is. Not only that, we don't know what the AI programs are doing with that information, how they verify it, and what kind of weighting is applied when there are conflicts. We have seen quite a few "bias" problems coming through in the results of late.

Dear @tarazkp!

Are you saying that the people who created AI are using AI to provide false information to the public?
I agree with you!

By the way, I was wondering if you knew of a way to eliminate information that AI manipulates.
How do you think you can distinguish information manipulated by AI?


Information needs to be put through a matrix of checks and balances for verification - a "web of trust" that assigns a confidence score to information by cross-referencing multiple data points. Once that is done, the AI is then able to sort through it and attach a confidence score to what it generates as well.
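For what it's worth, here is a toy sketch of that idea (every source name and trust weight below is hypothetical): each claim is cross-referenced against several sources, and its confidence score is the trust-weighted share of sources that corroborate it.

```python
# A toy "web of trust": confidence in a claim is the trust-weighted fraction
# of cross-referenced sources that corroborate it. All values are made up.

# Source -> prior trust weight (0..1), itself something to be earned over time.
SOURCE_TRUST = {
    "official_report": 0.9,
    "local_newspaper": 0.6,
    "anonymous_social_post": 0.1,
}

def confidence(corroborations: dict[str, bool]) -> float:
    """Trust-weighted fraction of consulted sources that agree with the claim."""
    total = sum(SOURCE_TRUST[s] for s in corroborations)
    if total == 0:
        return 0.0
    agree = sum(SOURCE_TRUST[s] for s, ok in corroborations.items() if ok)
    return agree / total

# Cross-reference one claim against three sources:
score = confidence({
    "official_report": True,
    "local_newspaper": True,
    "anonymous_social_post": False,
})
print(f"confidence: {score:.2f}")  # 0.94 - corroborated by the trusted sources
```

A generated answer could then carry the lowest confidence score of the claims it rests on, so the reader can see how well-verified it actually is.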


I could tell a lot of stories from this post by way of comments. But I learnt just one simple lesson: I can never give what I do not have; I can never produce what I don't have the ability to. And if I rely on other sources to help me, I could easily be deceived without knowing.
Thank you for such a message.


I don’t trust the AI very much about informations even though it can be true most times. There was something that happened in my presence and I asked AI about it but most informations I was given were wrong
It’s just like some journalists. They won’t wait to get all the informations. They’d just complete it with some write ups from their brains


It isn't true most of the time; it is just different degrees of wrong :)
