I am a programmer. I was a programmer before I was really a writer. I was actually a programmer before any of the things I consider myself good at. I was a decent Dungeon Master before I became a programmer but that only predates my first experience programming by a year or so. I was VERY young.
My first computer had 2K RAM. To put that into context this post would likely not have fit into its memory. Imagine trying to make simple games within such a constraint. That is where I began my adventures in programming.
I then moved on to 16K, and then 64K, which was my main area of operation for many years; later I kept building my own machines with more and more RAM. The machine I am currently writing this on has 32GB of RAM, and strangely enough I don't feel the need for more at the moment. Some of the servers I build and deploy for work have 96GB of RAM. When I was younger we were constantly running into barriers and limitations due to running out of RAM.
That led us to design things as compactly and efficiently as we could. Later on I began exploring the internet, and we had modems whose speed was measured in baud. My slowest was a 300 baud modem, and I could type faster than it could display information.
In that era I was heavily focused on figuring out how to do big things within the limits of the baud rate. As time progressed baud as a limitation started to fade, and this made things like HTML very successful: while it is not compact and efficient in terms of space, it is very easy for humans to read and use.
One of my passions was AI. I was particularly fond of trying to mimic spoken intelligence. Consider it an early ancestor of things like Alexa, Google Assistant, Siri, etc. I was actually inspired by things like Eliza and Racter.
My earliest attempt was actually quite the stupid and simple program I called "The Cutdown Machine". I was probably 13 or 14 when I wrote it. That'd put it in the 1983 or 1984 time frame. I remember losing cutdown battles with some of my "friends" who sometimes were not that verbally friendly.
The program was stupid and had absolutely no intelligence. It would take the things my friends and I said to it and stick them in an array of strings. It would then randomly slam different strings together and spit out a response. It was often gibberish and complete nonsense. Sometimes it was amazingly funny though, and would have us crying we were laughing so hard. I also found that I could mimic that approach myself afterward and come up with some pretty original and funny cutdowns. I didn't even have to use profanity to do it.
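The original was written in BASIC on an early home computer and is long gone, but the idea is simple enough to sketch in a few lines of modern Python. Everything here (the function names, the way fragments are spliced) is my own reconstruction of the technique, not the original code:

```python
import random

# Phrases the program has "heard" so far.
heard = []

def listen(line):
    """Store everything said to the program."""
    heard.append(line)

def cutdown():
    """Randomly slam stored phrases together and spit out a 'response'."""
    a = random.choice(heard).split()
    b = random.choice(heard).split()
    # Front half of one phrase welded to the back half of another.
    return " ".join(a[: len(a) // 2 + 1] + b[len(b) // 2 :])

listen("you smell like an old gym sock")
listen("your code is slower than a snail")
print(cutdown())  # gibberish most of the time, occasionally hilarious
```

There is no understanding anywhere in this: the "wit" is pure chance, which is exactly why the occasional hit was so funny.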
That was in no way intelligent. It wasn't even what we call an expert system. Yet by sheer luck of randomness sometimes it could appear to be so.
I read about Eliza in Scientific American, Omni, or some other magazine of the time. Where I lived I tended to know quite a bit more about programming than any available teachers. It was also pretty remote, so there were no BBS systems that could be reached without paying for expensive long-distance phone calls. I lived what other people were doing through magazines, occasional PBS documentaries on television, and the like.
I'd hear about things, or read about them and if I wanted to see what they did I had to attempt to make them myself.
Mimicking human speech the way Eliza does, in the quest for a program that can pass the Turing Test (a computer that, when conversed with through a communication device, is indistinguishable from a human), could be done in basic or complex ways. There were many approaches.
I quickly realized it was not truly aware of anything. It was programmed to react to certain words, and certain word positions. If you designed it well you could actually fool people for quite some time with just this technique.
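That "react to certain words and word positions" trick can be sketched in a few lines. These rules are my own illustration, not Eliza's actual script, but the mechanism is the same: scan for a keyword, fire a canned response, fall back to filler when nothing matches:

```python
import random

# Keyword -> canned responses, checked in priority order.
RULES = [
    ("mother", ["Tell me more about your family."]),
    ("i feel", ["Why do you feel that way?", "How long have you felt that?"]),
    ("because", ["Is that the real reason?"]),
]
FALLBACK = ["Please go on.", "I see.", "Very interesting."]

def respond(sentence):
    s = sentence.lower()
    for keyword, responses in RULES:
        if keyword in s:
            return random.choice(responses)
    # No keyword matched: stall with a neutral filler.
    return random.choice(FALLBACK)

print(respond("I feel nobody listens to me"))
```

Notice that the program never parses the sentence at all. With a well-chosen rule set, this alone was enough to fool people for quite a while.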
I started wondering about ways to make something aware of things even when the sentence structures varied. I played around with basic logic and imbuing programs with it. Things like telling it "A mammal is an animal", "A mammal has hair", "A human is an animal". Then I could ask it "Does a human have hair?", and with only the information I had provided it would say "I don't know", or some other sentence I designed to act like a person who doesn't know something. Perhaps I might make it try to change the subject. However, if I told it that "a human is a mammal" and then asked "Does a human have hair?" it would respond yes, because I had previously taught it that a mammal has hair.
The first time I wrote this I did so in a BASIC dialect on an Amiga 1000, which did not support recursion. I thus had to mimic recursion to some degree when traversing the logic tree. I decided to put a limit on how deep it would search. If something was far enough removed in the number of hops needed to get the answer, it might still say "I don't know".
I tweaked this a bit so that if you asked a question such as "Does a human have hair?" and it had to traverse to the second level to find out that a mammal has hair, it would automatically create a first-tier link for itself ("Humans have hair"), shifting that knowledge toward the top of what it knew about humans.
This worked well... It worked especially well when I decided to make it dream. If you didn't talk to it for a while it would go into dream mode, grab words, and ask itself questions about other words. If a question led to something several levels down in the recursion, the dreaming would create a level 1 link, so it actually kind of learned while it was dreaming. I could come back and ask it more complicated questions about things very far removed, and it usually would have an answer.
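The logic-tree idea above — "is a" and "has" facts, a depth limit, and promoting a derived answer to a first-tier link — can be sketched in modern Python. The original was in Amiga BASIC without recursion, so this is a loose reconstruction, and every name in it is my own:

```python
# Tiny knowledge base: "X is a Y" links and "X has Z" traits.
isa = {}        # e.g. isa["human"] = {"mammal"}
has = {}        # e.g. has["mammal"] = {"hair"}
MAX_DEPTH = 3   # beyond this many hops the program gives up

def teach_isa(child, parent):
    isa.setdefault(child, set()).add(parent)

def teach_has(thing, trait):
    has.setdefault(thing, set()).add(trait)

def find_trait(thing, trait, depth=0):
    """Depth-limited walk up the is-a links; returns hops or None."""
    if depth > MAX_DEPTH:
        return None                      # too far removed: "I don't know"
    if trait in has.get(thing, set()):
        return depth
    for parent in isa.get(thing, set()):
        found = find_trait(parent, trait, depth + 1)
        if found is not None:
            return found
    return None

def ask(thing, trait):
    found = find_trait(thing, trait)
    if found is None:
        return "I don't know"
    if found > 0:
        # The "dreaming" shortcut: promote the derived fact to a
        # first-tier link so the next lookup is immediate.
        teach_has(thing, trait)
    return "Yes"

teach_has("mammal", "hair")
teach_isa("human", "mammal")
print(ask("human", "hair"))   # -> Yes (and now stored directly on "human")
```

After that one question, `has["human"]` contains `"hair"` directly, which is exactly the self-reorganizing behavior the dream mode produced while idle.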
Was it alive? No. Was it self aware? No. Could it operate outside of what I programmed it to do? No. Could it fool people? Yes, sometimes... Was it Artificial Intelligence? No. It was not intelligent. It was simply what is called an Expert System. A system I designed within rules (aka algorithms) for how to do very specific things.
It could give an illusion.
Back then we didn't have anything like Google, the web, etc. If we had, I could totally imagine turning something like my system loose on the internet and letting it read web pages and learn new logical connections all on its own.
Yet even with that it'd still be trapped within the confines of the rules. It could potentially answer questions. It might be a good tool. Yet it would not be intelligent.
I was also completely aware of the concept of an algorithm. This is a word people use to make others stop questioning. They use it to try to shift the blame. They use it to justify actions. They count on the fact that most people are completely ignorant of what it truly means...
I would like to dispel that.
Algorithms are essentially recipes. They are a set of steps to follow to accomplish a specific task. Much like when you cook using a recipe, or you assemble something using an instruction manual. Algorithms will accept input that they can react to and they will provide their results in output. What is important to know is that programs are essentially just collections of algorithms.
If you create an algorithm to alphabetize names (aka sort) then you can reuse that algorithm within a program, or within other algorithms anywhere that you need something alphabetized. You build a little tool that follows instructions.
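An alphabetizing recipe really can be written out as a short list of steps and then reused anywhere. Here is one such recipe (insertion sort) in Python; the function name and the sample names are my own illustration:

```python
def alphabetize(names):
    """A simple recipe: take each name in turn and slide it into
    its correct place among the names already placed."""
    result = []
    for name in names:
        i = 0
        # Walk forward until we find where this name belongs.
        while i < len(result) and result[i].lower() < name.lower():
            i += 1
        result.insert(i, name)
    return result

# The same little tool, reused anywhere alphabetized output is needed:
print(alphabetize(["Zelda", "alice", "Bob"]))  # -> ['alice', 'Bob', 'Zelda']
```

A person wrote every step of that recipe, so a person knows exactly what it will do with any list of names you hand it.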
The key is that it was built by a person. A person knows what it will do. If it does something then it is very likely BY DESIGN and INTENTIONAL.
(Image Source: quora.com)
As things get more complicated there can be outcomes that were not expected. Sometimes they are good. Sometimes they are bad. When they are bad they are often called a BUG.
It can be helpful to imagine what steps you yourself would take if it were your job to do what an algorithm does. Trying that can demystify some of the excuses you'll encounter.
(Image Source: code.org)
Sometimes the unforeseen aspect of an algorithm, or a combination of algorithms, will be that it is somehow exploitable. This is how security breaches and other such things usually occur. It is how computer viruses tend to operate, though such things often also play upon expected, repetitive human behavior.
We tend to fix these exploits when their outcome is considered bad. But what do we see when the exploit granted someone power or wealth, and they are in a position to prevent a FIX from being implemented? Perhaps we encounter the phrase "The Code is the Law".
Considering how I've been explaining the operation of algorithms, you should really think about that for a while.
Code is Law
This is something that is a popular umbrella to hide beneath. It tends to only become a mantra when there are people that are benefitting from exploiting or taking advantage of some aspects of the code/algorithm/program and they don't want that "fixed". They may even say it isn't an exploit. "Code is Law". Often the only advantage they have is that they noticed it before others, took advantage of it, gained power/wealth/advantage in comparison to others and they don't want it closed. If they were not first to the exploit they might be in the camp of people that see it as a flaw or a bug.
If it was explicitly stated in the design documents that this exploitation was a desirable outcome then it would indeed not be a flaw. If on the other hand it is not mentioned in the design documents then it is likely it is just an exploit, oversight, flaw, or bug.
It becomes a problem when "Code is Law" becomes the chant to protect a flaw because it has ensured that those who discovered it will remain in power, gain power, and potentially have gained enough power to prevent others from taking advantage of the flaw as they did. At this point it is about power. It is not about ethics, morality, or any interest in other people.
Sometimes the Code is Law within a system if what is occurring was explicitly described in the design documents.
I can assure you I've read white papers for some projects where what people now say "Code is Law" was not only not explicitly listed but the rest of the document had a message that seemed to indicate such a thing would be destructive to the intended goals.
In this case "Code is Law" can corrupt and destroy a project...
Furthermore, code can be changed just as laws can be changed.
"The Code did it" is usually a lame excuse. Fine. The code allowed it. Does that mean we shouldn't fix it, improve it, challenge it, ask questions, etc.?
The Code Did It
In the past half decade we've seen an awful lot of censorship, banning, etc. It has often been blamed on algorithms and people just accept that as an excuse and move on.
Algorithms are programmed by people. If they are targeting specific groups at a much higher degree of scrutiny and/or censorship then that is by design. A person or group of people had to design the algorithm to facilitate that and they had to tell it what to look for.
It is a lame excuse and it is an excuse none of you should accept. It is people preying upon the ignorance of the population when it comes to computer programming.
What about machine learning?
Programs are built from collections of algorithms. With machine learning we take things such as alphabetizing and we allow the program to randomly try new code to improve the speed of the alphabetizing. Some code will fail and be discarded. Other code will work faster and will be what it uses until it finds something better. It is a kind of evolution targeting specific algorithms.
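That keep-what-works loop can be shown in miniature. Real machine learning usually tunes parameters or weights rather than literally rewriting code, so this toy mutates a candidate string instead of a program; the target, the scoring function, and all the names are my own illustration of the "discard failures, keep improvements" idea:

```python
import random

TARGET = "sorted"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def score(candidate):
    """How many positions already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    """Randomly change one position: a blind 'new attempt'."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

best = "".join(random.choice(ALPHABET) for _ in TARGET)
while score(best) < len(TARGET):
    challenger = mutate(best)
    if score(challenger) > score(best):
        best = challenger            # the better attempt survives
    # otherwise the failed mutation is simply discarded
print(best)  # -> "sorted"
```

Nothing in that loop understands sorting or words; it just blindly keeps whatever scored better, which is the whole trick.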
This can help with word recognition, facial recognition, and many other things. It is still just algorithms strung together. It is not awareness. It is not Artificial Intelligence.
It is expert systems mimicking intelligence and giving the illusion that they are intelligent, simply because they accomplish tasks that we tend to use our own intelligence to do.
Being ignorant about this subject has been weaponized and is often used by those pushing the propaganda to con the population.
They use these concepts the same way they do when they say "Experts say", "Scientists say", "Doctors say", "My priest says", etc. It is an argument from authority fallacy.
These algorithms are designed by people, and may be made more efficient through machine learning. What they choose to conceal, censor, etc. is dictated by humans. It is NOT the fault of the algorithm.
Code is Law applied to real life...
I can pick up a rock and smash someone in the head. The code of reality allows it. Does that mean I should do it? Does that mean if I do it that it is acceptable?
(Image Source: thepoliticalinsider.com)
How is that Gun Control algorithm working out for you Chicago? Compare places with high gun control algorithms to places without... you'll see a trend. It won't be the narrative the news is pushing.
EDIT: An interesting thing about guns. They have been called the "Great Equalizer". A gun does not care how wealthy a person is. A gun does not care how big or small a person is. It is a tool that works equally for all people. Do people abuse it? Yes. They tend to be brought to justice when they do so. Places with the most guns tend to be VERY polite. Places with the most mass shootings tend to be places where the criminals (i.e. those who don't care about gun laws) know that only police and others who will take a while to respond will be armed and able to stop them. Soft and easy targets. The guy who did the Pulse Night Club shooting went by several other targets and passed them due to security before stopping at the Pulse Night Club. Anecdotal? Sure, until you study the actual statistics.