An international team of researchers headed by Manuel Alfonseca of Universidad Autónoma de Madrid in Spain has published a paper arguing that a hypothetical superintelligent AI could grow beyond human control. Of course, at this point such a "superintelligence" is purely hypothetical, as the authors themselves note.
The article is predictably short on actual calculations, as is to be expected when most of the relevant metrics are unknown, but it is nonetheless quite interesting. A world taken over by machines is no longer purely the stuff of science fiction dystopia but a very real prognosis: a machine running an artificial intelligence capable of out-calculating our own. Which, I think, may indeed be a possibility.
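The paper's core lesson comes from computability theory: any procedure that must perfectly predict what an arbitrary program will do runs into the halting problem. A minimal sketch of that classic diagonalization argument, with illustrative names of my own (not taken from the paper):

```python
def halts(program_source: str, input_data: str) -> bool:
    """A hypothetical perfect decider: True iff the given program halts
    on the given input. No such decider can exist; it is stubbed here
    only to set up the contradiction below."""
    raise NotImplementedError("a perfect halting decider cannot be built")


def paradox(source: str) -> None:
    """Loops forever exactly when halts() claims the program halts,
    and halts exactly when halts() claims it loops -- so feeding
    paradox its own source contradicts any answer halts() could give."""
    if halts(source, source):
        while True:  # halts() said "halts", so we loop forever
            pass
    return           # halts() said "loops forever", so we halt
```

The analogy to AI containment: a "perfect containment check" would have to simulate a superintelligent program and decide in advance whether it ever causes harm, and that decision problem inherits the same undecidability.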
However, I see one way to safeguard ourselves against possible excesses associated with such technology. In short, I think we need a relatively low-tech "kill switch" capable of deactivating any of the technologies involved. This is, of course, antithetical to the idea of a fully integrated grid society (something like the surveillance grid in China). But if a switch exists, if there is always a capability to fall back from high tech to low tech (local supplies, local communications, local logistics, hand pumps at fuel depots and local generators, kill switches to literally cut power to various elements of the AI grid), humans still get to stay in control if it ever comes to blows with the AI, so to speak. Interestingly, this idea aligns quite well with notions of local power, regional independence and similar libertarian ideas.
Researchers Say It'll Be Impossible to Control a Super-Intelligent AI
David Nield, Science Alert, 18 September 2022
Superintelligence Cannot be Contained: Lessons from Computability Theory
Manuel Alfonseca, Manuel Cebrian, Antonio Fernandez Anta, Lorenzo Coviello, Andrés Abeliuk and Iyad Rahwan, Journal of Artificial Intelligence Research, January 2021