Humanity loves innovation. We create things that restore lost functions, cross vast distances, and connect us to the people who matter in our lives and hearts. Humanity also loves irony. It is ironic, for instance, that we choose to bestow upon some of our military creations a name that has brought untold horrors to fictional worlds.
Weak Artificial Intelligence
Weak or narrow AI simulates human cognition within a limited domain. It allows a task to be automated and analyzed at a speed and scale not possible for human beings. The COMPAS program, for instance, is a narrow-AI tool meant to provide fast, accurate risk analysis within the Wisconsin judicial system.
The Wisconsin court system supports the Northpointe Inc. program to the degree that judges mete out sentences based upon its recommendations. They issue these sentences almost blindly, despite the program's apparent bias against people of color. Of course, those at Northpointe vehemently defend their product and its astronomical accuracy of 65%. Sixty-five percent of the time, it works every time.
It seems that the Wisconsin Supreme Court is also comfortable with that level of inaccuracy and doubt. Part of its ruling indicated that COMPAS shouldn't be the only tool a judge uses. However, the judge handing down the sentence in question did, in fact, rely on the COMPAS weak AI for sentencing.
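To put that 65% figure in perspective, here is a minimal sketch of what a recommendation tool at that accuracy level does to a docket of defendants. Everything here is hypothetical — the real COMPAS model is proprietary and its internals are secret; this only simulates a black box that is right 65% of the time:

```python
import random

random.seed(42)

ACCURACY = 0.65  # the accuracy figure cited for the COMPAS-style tool


def risk_recommendation(actually_reoffends: bool) -> bool:
    """Hypothetical risk tool: returns the true outcome only 65% of the time."""
    if random.random() < ACCURACY:
        return actually_reoffends       # correct recommendation
    return not actually_reoffends       # wrong recommendation


# Simulate 10,000 defendants, half of whom actually reoffend.
defendants = [i % 2 == 0 for i in range(10_000)]
wrong = sum(risk_recommendation(d) != d for d in defendants)
print(f"Misclassified defendants: {wrong} of {len(defendants)}")
```

Run over ten thousand cases, a 65%-accurate oracle hands a judge roughly 3,500 wrong recommendations. That's the tool being followed "almost blindly."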
Jake Frankenfield, in his Investopedia article entitled "Strong AI," holds that strong artificial intelligence represents the "theoretical next level of artificial intelligence." The strong form of AI implies complex reasoning, gameplay, and even consciousness. To date, however, there is no concrete example of a machine achieving consciousness.
One example of this attempt involves Google's DeepMind project. It hasn't achieved consciousness, but it is making significant progress, predicting the 3D shapes of proteins faster than anyone thought possible.
There are plenty of arguments against the ability of machines to achieve consciousness; the Turing Test and the Chinese Room Argument are two famous touchstones. However, as time progresses, AI keeps clearing bars once thought unreachable. I doubt John Searle, creator of the Chinese Room Argument, ever conceived of a world where an AI could pilot a vehicle or a fighter jet.
Beginning of an Ending
In a talk (available on YouTube), Elon Musk described a fundamental flaw in the development of artificial intelligence. Smart people working on AI, to paraphrase his statements, cannot conceive of a computer being smarter than they are. As a result, those smart people discount the possibility instead of protecting themselves and society from it. He described an enduring flaw of arrogance in humanity, one that has led us to catastrophic events like Hiroshima and Nagasaki.
To my surprise, Musk stated that AI is more dangerous than nuclear weapons. According to him, we wouldn't build nuclear weapons just anywhere, so why would we create a conscious AI without regulation?
Regulation of a sort exists within the United States, but it is a bill meant to spur AI development rather than restrain it. S.3771, the FUTURE of Artificial Intelligence Act of 2020, acts to enhance funding for research institutions pursuing various versions of weak and strong AI.
About a year before introducing the bill, the 116th Congress held a session on the ethical implications of artificial intelligence.
And yet, the world continues to move forward...
SkyNet arises almost like any story's antagonist. People try to create a thing. It grows beyond its creators' capabilities and turns upon them. Society revolts and defends itself. Depending on the story, the evil villain either defeats, or is defeated by, the oppressed. We've heard stories like this all our lives. Some people never learned from them or took precautions to prevent their occurrence.
Sky Net Espionage
General Michael Hayden, former director of the CIA and NSA, admitted at a Johns Hopkins symposium that the United States killed foreigners via drone strikes based upon metadata. The General assured the symposium's guests that American citizens were not victims of said drone strikes.
How did the NSA and CIA get their targets? Skynet handed the targets to them.
Skynet was a narrow-AI program that fed upon the cell phone metadata of tens of millions of people in Pakistan. The Guardian article referenced here, entitled "Death by drone strike, dished out by an algorithm," notes that Skynet also focused upon Afghanistan and Yemen.
NSA-Skynet reportedly works by analyzing the captured metadata and comparing it to search criteria. If your metadata resembles the metadata of a "terrorist," you're a target, and the days you have left on Earth may be numbered.
Like the COMPAS program of Wisconsin, a human decides to target a drone strike based upon the Skynet algorithm's recommendations.
The Dance of Death?
I'll be the first to admit that the dancing is cute and fun to watch. It's fun until I recall the other things we're teaching robots to perform.
Between AI trained to pilot fighter jets, algorithms designating human targets, and algorithms leading a blind judicial system to sentence people by secretive guidelines, I'm not worried at all.
Love and the Unknown
We love to use technology to improve humanity. I honestly believe this to be true. There are, unfortunately, other things that I feel society equally loves to achieve, namely:
- Convenient Love
Sometimes, our conflicts of interest blind us to the consequences of our actions. Perhaps what we, as a society, want instead is to achieve everything at once, without the consequences that may come from getting everything we want. We love to have things.
Perhaps we wish for these things because we're insane at some fundamental level. It's almost as if the love of pursuing the achievement of a thing shuts down certain parts of our brain.
Image by Gerd Altmann from Pixabay
Thank you for joining me on yet another STEM-related article. As always, thank you for reading and following on throughout my Hive journey.
Special thanks go out to @stayoutoftherz, who first pointed out the differences between weak AI, strong AI, and John Searle.
Posted with STEMGeeks