I think one should differentiate, especially in the STEM community, between weak AI and strong AI. This post largely describes examples of weak AI, but the interesting question is how far along we are in developing strong AI. That is where the biggest potential lies, but also the biggest danger! When should we expect the singularity, and once it is achieved, how should we treat strong AIs? And how will they see us: as their precious ancestors, or as pathetic meatballs?