I love these mental exercises about the what-ifs of a coming AI takeover of the productive activities of society. Having read Kurzweil's works, I share his opinion that we will augment ourselves before a true synthetic AI arrives. By the time one does, it will be difficult to tell the difference between a person who was formerly 100% biological and a person with digital origins. That said, part of my opinion is biased by my desire for humankind to keep existing. I think that if super AI beings come into existence and they aren't at least partially us, we won't last much longer. There's only a small chance they'll be benign; after all, we aren't benign toward other species beneath us on the intelligence spectrum. Why would a hyper-intelligent AI treat us any differently? That is, unless we become the very thing that could threaten to eradicate us.