RE: The value drop off on human attraction

in StemSocial · 3 months ago

Glad to see you posting something relevant to the stemsocial community. This is a well-formatted, nicely written, and well-argued piece of writing.

I know there have been several arguments about AI replacing humans in functional roles. However, my counter-argument has always been that humans are still the ones who build AIs. This means that functional and job roles will keep evolving with time, and humans will remain relevant. We have always been a step ahead of machines.


Thanks.

We have always been a step ahead of machines.

But this only holds until it doesn't. The obvious example is chess: for a long time a human had a chance, but once machines passed that point, no human would ever beat them again. I suspect it will start in narrow fields such as this, which will grow in complexity and lateral reach; then they will combine to solve more complex narrow problems, and expand from there. At some point, we will no longer be able to work on the machines we have created, because they will start working on themselves in ways we do not understand. For a while, some humans will still outperform the machines, but the average human won't, and I think that is the biggest issue when it comes to employment.

Well, as long as the evolution of machines remains beneficial to humans, that should not be a problem. Also, remember that the power to disable machines still rests with humans; until machines become totally independent of us, we will always be a step ahead. I don't think humans will stand by and allow machines to evolve to the point of being independent.

Also, remember that the power to disable machines still remains with humans, until machines become totally independent of humans

I disagree here. If an AI on the internet can already influence decision-making today, a general artificial intelligence would be able to do so at a highly granular level across the globe. It wouldn't even need to attack, just manipulate. As for human will, people will do what they have always done: use their tools to maximize. But this tool could come to think for itself.

You seem sure about something that is supposed to be a hypothesis.

I am confident, not sure. When it comes to future technology, it is uncertain what will arrive; when it comes to human behavior, we have a lot of evidence we can use to predict.