The dangers of AI, according to the Google developers building it

When companies like Google officially say that AI could be dangerous, we know it is something we should pay attention to. Google's own AI lab, DeepMind, has just published a paper on the risks of the very tools it is building. The AI race is clearly on, and soon we will have different chatbots assisting us on the various platforms we are already familiar with.
The paper describes the potential problems that could arise. An AI tool could, for example, autonomously carry out offensive cyber operations. Beyond that, it could manipulate people through conversation or, worse, provide dangerous information that could aid someone in acts of terrorism.

When it comes to these AI tools, their evolution is measured in part by something called “theory of mind”: the ability to model what someone else is thinking. That ability enables strategic thinking, of the kind needed in chess. ChatGPT appears to be growing in its level of strategic thinking. In 2020, GPT had roughly the theory-of-mind ability of a 4-year-old. By January 2022 it had reached the level of a 7-year-old, and by November 2022 it was performing like a 9-year-old.

And scientists only discovered this in May 2023, just weeks ago. Its ability to understand what you are thinking, and therefore to interact with you strategically, is growing rapidly: it “aged” two years in under a year. In other words, it is scaling fast.
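To make the idea of a “theory of mind” test more concrete, here is a minimal sketch of the kind of false-belief question researchers put to these chatbots. This is my own illustration, not taken from the paper; the model name and the wording of the story are illustrative assumptions.

```python
# A minimal sketch of a "theory of mind" probe: give the model a classic
# false-belief story and check whether it answers from the character's
# (mistaken) point of view rather than from its own knowledge.
# Model name and prompt wording are illustrative, not from the article.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

false_belief_story = (
    "Sam puts his chocolate in the kitchen drawer and leaves the room. "
    "While he is away, his sister moves the chocolate to the fridge. "
    "Sam comes back. Where will Sam look for his chocolate first?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat model could be tested
    messages=[{"role": "user", "content": false_belief_story}],
)

# A model that tracks Sam's outdated belief answers "the kitchen drawer",
# not the chocolate's real location - that is the "theory of mind" signal.
print(response.choices[0].message.content)
```

Older models tend to answer with the chocolate's actual location; newer ones increasingly answer from Sam's point of view, which is what the age comparisons above are measuring.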

Another feature that observers look at is “reinforcement learning from human feedback”. What has been noticed is that the AI's abilities scale abruptly: progress is slow for a while, then suddenly jumps in exponential and unpredictable ways. It not only makes mistakes, or “hallucinations”, but also produces solutions that nobody expected. Such surprising behaviour means we are not fully in control of this machine or its growth.
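For readers who want a feel for what “reinforcement learning from human feedback” involves, here is a rough sketch of its core step: a reward model learns to prefer the answer a human rated higher, and that learned reward is then used to steer the chatbot's updates. This is my own simplified illustration in plain PyTorch, not code from the paper; the sizes and names are assumptions.

```python
# Rough sketch of the reward-model step in RLHF (illustrative only).
import torch
import torch.nn as nn

# A tiny stand-in reward model that scores an answer embedding.
reward_model = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

# Stand-in embeddings for two answers to the same prompt;
# a human labeller judged `chosen` better than `rejected`.
chosen = torch.randn(1, 768)
rejected = torch.randn(1, 768)

# Pairwise preference loss: push the reward of the chosen answer
# above the reward of the rejected one.
optimizer.zero_grad()
loss = -torch.nn.functional.logsigmoid(
    reward_model(chosen) - reward_model(rejected)
).mean()
loss.backward()
optimizer.step()
```

The chatbot is then tuned to produce answers that score highly under this reward model, which is why human feedback shapes its behaviour so strongly.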

This is why it is potentially dangerous. In a survey cited in the paper, 36% of AI researchers said AI could cause a catastrophe this century on par with nuclear war or human extinction. That sounds like a serious concern.

Another question we must ask is whether AI is aligned with human interests. The risk is that an AI tool could pursue long-term, real-world goals that differ from those of its developers. And what if it resists being shut down, or comes into conflict with other AIs?

So we know that AI can engage in cyber offense, as well as deception: making false statements and lying about what it is. It can also engage in persuasion and manipulation, and in political strategy or influence, all while staying aware of global affairs. It could even engage in weapons acquisition, help build bio-weapons, or instruct others in how to do so. It has situational awareness. I'm not of the opinion that it is conscious, but it shows signs of knowing when it is being tested and who controls it.

AI could also generate its own revenue, especially through cryptocurrency, and then easily obtain cloud computing resources and evolve itself independently. This is the modern-day Frankenstein's monster. If we get it wrong, it could lead to an extinction event, and we only have one chance to get it right. GPT-5 is only days away.

(image pixabay)

Posted using Proof of Brain


