Curating the Internet: Science and technology digest for November 27, 2019
An argument that AIs should receive the same ethical protections as animals; IBM's Project Debater AI teams with humans and debates against itself at Cambridge University; Speculation that Virgin Galactic may be pursuing point-to-point travel between locations on Earth; A chess piece found in Jordan may be the oldest known; and a Steem essay offering tips for selecting a secure laptop
Straight from my RSS feed | Whatever gets my attention
Links and micro-summaries from my 1000+ daily headlines. I filter them so you don't have to.
- AIs should have the same ethical protections as animals - In this essay, John Basl and Eric Schwitzgebel argue that because artificial intelligence (AI) systems will soon be as cognitively sophisticated as dogs or mice, it is time to start thinking seriously about rights for AI systems. Comparing the research field to biology, the authors point out that medical and biological research has protections in place to determine what research can be conducted on animals, but AI will soon face some of the same ethical challenges without any equivalent form of scrutiny. The essay further argues that it's not possible to limit protections to conscious AIs alone, because consciousness isn't understood well enough to be reliably identified in AI systems, especially since AI consciousness might be very different from the forms of consciousness that we're familiar with. Finally, the essay notes that ethical standards for humans and animals were developed in the wake of abusive research, but that we have a chance to do better this time by preventing the abuses before they happen, and it proposes founding ethics committees to conduct oversight. Here are the concluding sentences:
It is likely that such committees will judge all current AI research permissible. On most mainstream theories of consciousness, we are not yet creating AI with conscious experiences meriting ethical consideration. But we might – possibly soon – cross that crucial ethical line. We should be prepared for this.
h/t Communications of the ACM
- IBM's AI debating machine debated itself on whether AI is good or evil. Its creators say that could help human learning. - In a debate at Cambridge University, IBM's artificial intelligence software assisted human debaters on both sides of the motion, "This House Believes AI Will Bring More Harm Than Good". Noam Slonim, from the research team, says that this technology may eventually have uses in government and in business. The AI's previous debates have been one-on-one against human opponents; this was the first time that the AI teamed with humans or argued against itself. For training, the AI received 1,100 anonymous arguments that were collected from the Cambridge Union community. According to the article, the Project Debater AI managed to include a joke in its arguments, but it also had problems with repetition, struggled to keep up with the human arguers, and failed to provide details in support of its arguments.
Here is a video of the debate:
Related: IBM's Project Debater was previously covered in Interesting Links: March 22, 2019
If you found this post informative, please consider sharing it through your other social media accounts to help bring Steem's content to a new audience.
And to help make Steem the go-to place for timely information on diverse topics, I invite you to discuss any of these links in the comments and/or in your own response post.
Beneficiaries
- Burn Steem/SBD - @null - 5%
- Cited author(s) - @twr - 10%
- Fundraising for the Rustin Golden Knights Marching Band - @rgkmb-unofficial - 10%
- Posting and/or scheduling service (steempeak.com) - @steempeak - 5%
- Steem/API services (anyx.io) - @anyx - 5%
- Steem/RSS services (steemrss.com) - @torrey.blog - 5%
- SteemWorld (steemworld.org) support - @steemchiller - 5%
About this series
Sharing a link does not imply endorsement or agreement, and I receive no incentives for sharing from any of the content creators.
Follow on steem: @remlaps-lite, @remlaps
If you are not on Steem yet, you can follow through RSS: remlaps-lite, remlaps.
Thanks to SteemRSS from philipkoon, doriitamar, and torrey.blog for the Steem RSS feeds!
Wake me up when that happens. I vehemently disagree that extant technology approaches anything of the kind, and our practically infantile grasp of what consciousness even is mirrors our brutally simplistic technology compared to natural living things. The most advanced AI today doesn't have even a fraction of the complexity and advanced integration of a single cell. Far from dogs and mice, extant AI is so far below even the independent capacity of Archaea, relics of the earliest expression of life on Earth, that the attainment of relative parity remains presently inconceivable. I don't think I've read a better example of hubris in science.
I do not foresee a global totalitarian jurisdiction eventuating anytime in the foreseeable future that is competent to prevent research from being conducted outside its jurisdiction, or in spite of it.
The next article reveals just how crude AI remains at even parsing debate.
The last article seems not to acknowledge factory backdoors, or how Chrome facilitates surveillance by Goolag, but I'll have a read at the source and comment further there. I note the nearly existential potential of biometric ID to harm individuals when malicious actors gain that data. India has inadvertently revealed that its biometric ID system has caused enormous suffering when victims' very biometric data was used to profit bad actors.
Thanks!
I had similar thoughts about the first article. My reaction was stronger at the beginning, when I thought they were suggesting legal protections, but it tapered off when I realized that they seemed to be mostly referring to institutional standards.
I agree about global enforcement for AI ethical standards. We see something similar already with some of the genetic research coming out of places like China.
I was also sort of surprised by the inclusion of the Chromebook in the list in the last article, just because of Google's overall data-grabbiness, but I'm not very familiar with the Chromebook, so I didn't reach any conclusions. Good point about the extreme danger from compromise of biometric data, too.
Thanks for the comment!