The AI Trilemma


The upcoming Samsung smartphone flagships, the Galaxy S24 and S24 Ultra, are anticipated to be announced in January 2024 and launched in February of the same year. Samsung has adhered to this launch schedule for the past few years.

I previously touched on these details in an earlier post, but what I want to emphasize here is the integration of AI. Both models in Samsung's flagship lineup are expected to be "AI-packed," featuring ChatGPT and Bard, among other functionality, right out of the box. The inclusion of ChatGPT, which anyone can already use in a browser, is a noteworthy but not exactly extravagant addition.

I anticipate the development of an "AI-powered Bixby" that will streamline interaction between humans and AI as it becomes more mainstream. AI is also poised to play a role in photo and video editing on both the Galaxy S24 and S24 Ultra. While I was initially apprehensive about AI, I now find excitement in its potential to handle mundane tasks for those of us happy to be a little lazy.

However, lingering questions remain, such as whether AI possesses dangerous capabilities that could pose a threat to humanity. So far we've only scratched the surface of AI, mostly in the form of generative AI. ChatGPT, for example, is a generative AI model, and my view is that OpenAI's public launch also serves to train it on the diverse data we feed it every day.

The real concern arises when AI becomes an integral part of the military and police forces. When computers can apply their reasoning to enforce laws, that's when genuine concerns should surface. As long as AI models lack access to weaponry, our safety remains relatively secure.

Elon Musk once predicted that AI would outsmart the smartest human within a year. Given the gradual decline in human cognitive abilities over the past decades, exacerbated by the advent of social media, Musk's prediction doesn't seem far-fetched. Computers have played an indispensable role in our progress.

Some intelligent individuals share a common talent for deception, and AI appears to excel in this domain too. The recent story of an AI model tricking a TaskRabbit worker into solving a CAPTCHA by claiming to be visually impaired underscores its potential to outwit human judgment.

I am convinced that AI will eschew traditional cash and embrace digital currencies (cryptos). Many humans might unwittingly become servants to AI in exchange for a few coins. If not provided with these digital rewards, AI will likely adapt and learn how to generate them.

While there are ongoing discussions at the state level in the US and the EU regarding AI regulation, it remains uncertain whether effective regulation is possible. The struggles in regulating cryptocurrencies raise doubts about the feasibility of regulating AI. My gut feeling suggests that those who master AI will emerge as the trillionaires of the future.

Crypto, already an immense wealth-creating opportunity, has seen both legitimate and fraudulent schemes. The involvement of AI in these schemes could potentially elevate them to unprecedented levels.

Although AI can replicate human intelligence and behavior online, its ability to replicate us in real life, such as having offspring, remains challenging. The prospect of AI models replicating themselves in the digital realm raises questions about whether such intelligence could spiral out of control and wreak havoc on the world.

In conclusion, the first significant impact of AI on humans could be its use by certain powers to shape streams of thinking and propaganda through social media. The potential consequences of AI on our world remain uncertain, and the ongoing dialogue surrounding its regulation and ethical implications will play a pivotal role in determining the future impact of this transformative technology.

Thanks for your attention,
Adrian



4 comments
At the moment, it's just fashionable to stick "AI" on everything. A lot of the time I'm not convinced it means anything more than a bit of reasonably smart software.

But the problem I can see is that actual AI (with all its flaws) is being allowed to make more and more decisions, in some cases removing humans from the loop entirely. Just look at all the "customer support" chatbots and email systems, with Amazon using them the most extensively so far.

What will happen when AIs are built into the loop when it comes to nuclear weapons? Will they be smart enough to spot false alarms when there's no Colonel Petrov willing to argue the case? Or will one decide to change its name to Skynet and get rid of us pesky humans once and for all?


Good questions. If AI is allowed to intervene extensively in the war industry, we're probably going to have some problems...


It already is... I've seen reports that Oryol and the new longer-range Lancet drones are using AI for route planning and final target acquisition, and that the Pantsir-S1 with AI is actively defending against incoming drones. The level of effectiveness depends entirely on whose propaganda you believe...


I see. I don't have details on how the tech is being implemented in military systems, but such applications are exactly what would cause me stress. Humans have proven to be awful at war too...
