Curating the Internet: Science and technology micro-summaries for September 18, 2019

Authored by @remlaps

The justification behind the holographic principle; A changing paradigm for cosmology; AI learned to use tools by playing "hide and seek"; An argument that the health risks for 5G technology merit further study, but should not prevent deployment; and a Steem essay describing the systemic consensus (SK) process for group decision making


Fresh and Informative Content Daily: Welcome to my little corner of the blockchain

Straight from my RSS feed
Whatever gets my attention

Links and micro-summaries from my 1000+ daily headlines. I filter them so you don't have to.



pixabay license: source.

  1. Why do some scientists believe that our universe is a hologram? - I think this post by Sabine Hossenfelder is the best non-technical description I've seen of the justification behind the "holographic principle", the claim that our universe is a projection of a 2-dimensional surface, not the 3-dimensional construct that we think we perceive. In short, she says that the idea is untestable with current technologies, but it appeals to some scientists because it is consistent with the way a black hole's information content maps onto its event horizon (see the entropy formula after this list), and also with a theoretical construct called an Anti-de Sitter space. She closes the post saying that she finds the idea unpersuasive, but thinks it is interesting. The link contains an embedded YouTube video and a written transcript.

  2. The Universe Is Not in a Box - In this conversation with edge.org, Julian Barbour discusses his research. He compares his "big idea" to the emergence of thermodynamics out of a simple six-page engineering paper on optimizing steam engines. He says that the big difference between the problem of optimizing steam-engine efficiency and the field of thermodynamics is that the steam-engine problem considered energy transfer inside a box, whereas thermodynamics took away the container. He suggests that a similar paradigm shift needs to be made in cosmology, and discusses his own efforts to lead that shift. These efforts include work with his collaborators to distill Einstein's general relativity down to just its essential components, which he believes have to do with the shape of the Universe, and also the addition of something he calls "Janus points" - boundary points in spacetime where the direction of time reverses - to explain why time seems to be unidirectional to every observer. Barbour contends that, after further development, his ideas may either refine the predominant theory of inflation or lead to a competing theory. As with most edge.org conversations, the link contains a video and a transcript.

  3. AI learned to use tools after nearly 500 million games of hide and seek - OpenAI researchers are working to see what happens if you expose AI to an environment that mimics the evolutionary environment of early life forms. To do this, they are incorporating two ideas: multi-agent parallel learning, and a spectrum encompassing competition and coordination. In a new paper, they describe their results from accomplishing this in a virtual game of hide and seek (a toy sketch of this kind of competitive self-play appears after this list). In these trials, two opposing teams played 500 million games, during which they learned to deploy complex strategies and counter-strategies that included tool use and cooperation. Strategies that the "hiders" developed included moving and locking blocks in the virtual environment and handing blocks to teammates. Meanwhile, "seekers" learned to use ramps and climb over walls that had been built by the "hiders". One of the authors, Bowen Baker, suggested that this demonstration of unguided intelligence shows that AI can be cultivated as an emergent behavior, represents a promising research area for AI improvement, and even has the potential to provide solutions to problems that are currently unsolved.

  4. 5G Is Coming: How Worried Should We Be about the Health Risks? - The article acknowledges widespread public concern about increased and involuntary exposure to radio-frequency (RF) energy, but says that most scientists who have studied the question have found little risk, especially from the marginal change associated with the transition from 4G to 5G. A number of studies have uncovered biological effects from RF energy, but few have linked those effects to health risks. However, the article does note that some scientists have been very vocal in raising red flags, and adds that as the technology evolves, the question will always remain open for additional study. On a personal level, after my mom died from brain cancer, RF exposure is something that has worried me a bit. However, I don't worry about the cell towers or booster stations. Because the power of radio waves falls off with the square of the distance, I only worry about devices that are in very close proximity (see the quick calculation after this list). As a precaution, I only use a wired headset or put the phone on "speaker" mode, and I generally don't keep my phone on my person.

  5. STEEM GROUPTHINK - a systemic view on group decisions and consensus - In this Steem post, @erh.germany describes the systemic consensus (SK) method for group decision making. The post points out that this method is preferable to seeking unanimity, because large groups that strive for unanimity are ultimately paralyzed by indecision or dominated by corrupt alliances. In the SK method, instead of using a simple vote, group members rate their resistance to each choice on a scale from one to ten, the scores are tallied, and the option with the least total resistance is chosen (a minimal sketch of the tallying appears after this list). The article poses three questions for an individual to determine whether to participate in the SK process: (i) Do I feel that the outcome will affect me? (ii) Do I feel that I will be accountable for the outcome of the decision? (iii) Do I feel that I will be part of carrying out the decision? The more of these questions a person answers "yes" to, the stronger the incentive to participate.
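
For item 1, the usual motivation for the holographic principle is the Bekenstein-Hawking result that a black hole's entropy (and hence its information content) scales with the area of its event horizon rather than with its volume. The textbook formula is below; it is offered only as background, not as something taken from Hossenfelder's post.

```latex
% Bekenstein-Hawking entropy of a black hole with horizon area A:
% entropy grows with the AREA of the boundary, not the enclosed volume,
% which is what suggests that the physics of a 3-D region might be
% encoded on its 2-D boundary.
S_{\mathrm{BH}} = \frac{k_B \, c^3 \, A}{4 \, \hbar \, G}
```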
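For item 3, here is a minimal, hypothetical sketch of the competitive self-play idea: two learners play repeated games with opposing rewards, and each gradually prefers the actions that work against the other. The environment, action names, and learning rule are my own toy assumptions; OpenAI's actual setup uses full reinforcement learning in a simulated 3-D world.

```python
import random
from collections import defaultdict

# Toy hide-and-seek self-play: the hider is rewarded when it is NOT found,
# the seeker when it IS found. Both sides learn from repeated games.
SPOTS = ["behind_wall", "in_room", "open_field"]   # hypothetical hiding spots

def make_learner():
    return {"value": defaultdict(float), "count": defaultdict(int)}

def choose(learner, epsilon=0.1):
    # Epsilon-greedy: mostly pick the best-known spot, sometimes explore.
    if random.random() < epsilon:
        return random.choice(SPOTS)
    return max(SPOTS, key=lambda a: learner["value"][a])

def update(learner, action, reward):
    # Incremental average of the reward seen for this action.
    learner["count"][action] += 1
    n = learner["count"][action]
    learner["value"][action] += (reward - learner["value"][action]) / n

hider, seeker = make_learner(), make_learner()
for _ in range(100_000):
    h, s = choose(hider), choose(seeker)
    found = (h == s)                        # opposing, zero-sum outcome
    update(hider, h, 0.0 if found else 1.0)
    update(seeker, s, 1.0 if found else 0.0)

print("hider prefers :", max(SPOTS, key=lambda a: hider["value"][a]))
print("seeker prefers:", max(SPOTS, key=lambda a: seeker["value"][a]))
```

In this toy version the two sides simply chase each other's preferences around; the point is only the opposing reward structure, not the emergent tool use and strategy escalation described in the article.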
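For item 4, the inverse-square reasoning is easy to make concrete. The transmit powers and distances below are round illustrative assumptions, not measured values, but they show why a handset held against the head can dominate exposure compared with a distant tower.

```python
from math import pi

def power_density(watts, meters):
    # Free-space estimate: transmitted power spread over a sphere of radius r.
    return watts / (4 * pi * meters ** 2)   # W/m^2

phone_at_ear = power_density(0.25, 0.02)    # assumed ~0.25 W handset, 2 cm away
nearby_tower = power_density(100.0, 100.0)  # assumed ~100 W antenna, 100 m away

print(f"phone at ear : {phone_at_ear:.2f} W/m^2")
print(f"nearby tower : {nearby_tower:.6f} W/m^2")
print(f"ratio        : {phone_at_ear / nearby_tower:,.0f}x")
```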
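For item 5, here is a minimal sketch of the tallying step in systemic consensus as I understand it from the summary above: each member scores how much they object to each option, and the option with the lowest total resistance wins. The sample options and scores are made up for illustration.

```python
# Resistance points per option, one score per group member
# (1 = no objection, 10 = strong objection), per the scale described above.
resistance = {
    "Option A": [2, 3, 8, 1],
    "Option B": [4, 4, 4, 4],
    "Option C": [1, 9, 9, 2],
}

totals = {option: sum(scores) for option, scores in resistance.items()}
winner = min(totals, key=totals.get)

for option, total in sorted(totals.items(), key=lambda kv: kv[1]):
    print(f"{option}: total resistance {total}")
print("Chosen (least resistance):", winner)
```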


My other open posts

(as of Tuesday afternoon)
@remlaps

@remlaps-lite

Fundraising for the Rustin Golden Knights Marching Band by @rgkmb-unofficial


In order to help make Steem the go-to place for timely information on diverse topics, I invite you to discuss any of these links in the comments and/or in your own response post.

Beneficiaries


About this series


Sharing a link does not imply endorsement or agreement, and I receive no incentives for sharing from any of the content creators.

Follow on steem: @remlaps-lite, @remlaps
If you are not on Steem yet, you can follow through RSS: remlaps-lite, remlaps.


Thanks to SteemRSS from philipkoon, doriitamar, and torrey.blog for the Steem RSS feeds!



7 comments

Great links!

I am also not convinced of the holographic principle, as it seems to be based on a flawed understanding of entropy. Information isn't lost in a black hole; it's just no longer available to observers outside the black hole. We grasp very little of what this actually means for physics, and failing to reckon with our own limitations in handling information seems to be why this theory is propounded.

Regarding emergent AI, I am convinced it is useful to bear in mind how evolution has progressed on Earth, with apparently increasing cooperation developing from previously insensitive systems. During emergence, radically insensitive mechanisms should be expected to only gradually gain sensitivity and holistic response. For example, death evolved prior to the radiation of organisms that produced extant ecosystems, because it is evident in all known lifeforms: all extant organisms share a common ancestor that had already evolved death.

Ironically, species that had not evolved the death of individual organisms have all died out. This feature of life is dramatically counterintuitive, yet ubiquitous in practice. It seems, upon reflection, that death potentiates evolution, and thus development, but it is notable that extant institutions are highly likely to consider preventing their own death absolutely imperative, however relevant that death may be to their field of endeavor.

We all want to live forever, yet emergent life is mortal. We are prone to introducing not only such biases, but an unknown range of possibilities as inconceivable to us as programming mortality into our DNA.

This seems to me to recommend extreme caution in the assessment of AI functionality, particularly in existentially relevant matters. Given the high complexity of real-world systems, the fields applicable to the AI in question may take extremely large amounts of development before the AI becomes nominally responsive not just to its field of applicability, but to the whole system of systems upon which the state of that field depends. You might think that 500 million plays is a lot, but life has been emerging and developing responsivity to the real world for over four billion years, and 500 million years only takes us back to the Cambrian explosion - roughly 12% of that total span, and one that demonstrably resulted from dramatic evolution of cooperative pathways applicable holistically. Clearly the experimenters crafted baseline teams and cooperative behaviour, but the enormous complexity of real-world systems leaves a lot of room for bias and hubris to prevent nominally holistically responsive actions in the AI, actions that may only be relevant in highly particular circumstances.

Such shortcomings are therefore likely to effect unexpected and potentially harmful actions by an AI so limited in experience, and therefore in development, reflecting the hard limits on developer capacity. We take shortcuts, and yet butterflies do wreak hurricanes through the gestalt that emerges from their sum and the rest of the whole.

In brief, the power of AI to control important systems needs to be limited until the hypercomplexity of its development is demonstrably beneficial and not harmful in exigent and novel circumstances. Even then, actions comparable to extinction-level events must be expected, and must be able to be mitigated, in emergent systems.

We don't want warlord, health, or financial AI basing its deployments on poorly conceived baseline underpinnings, amongst the developments likely to be forthcoming in AI applications. Consider that death is predicated on the development of species, and how these features relate is inexplicable, perhaps unavoidably. Just inserting our biases is likely to result in unexpected and potentially harmful consequences, particularly existential ones and those apparent only in specific and unique or rare circumstances.

Thanks!


Good feedback. Thanks! The point you made about species that don't incorporate individual deaths (planned obsolescence) in their design all being extinct is an interesting one that I haven't heard before. That puts a new light on the modern research into reversing aging, too. And I agree that the power of AI needs to be limited for the foreseeable future, especially in high impact realms like health, military, and finance.

I don't have much of an opinion one way or another on the holographic universe. I think it's interesting to think about, but unless or until they can prove the claim, I guess the default has to be that the three dimensions we think we perceive are all really there.


What I particularly don't like about the first entry is that the author has a very strong opinion about what should be worked on and what should be ignored (see the last paragraph of her post). This is, in my opinion, very dangerous. One should, on the contrary, be pragmatic and try everything. This is how we learn things. And the beauty is that what is interesting for one individual may be boring for another, and vice versa.

Personally, this is quite far from what I do (I am not a string theorist ;) ).

For the other selections, I do not have much to comment on (I already discussed the 5th one a little in the post by @erh.germany).


She does have strong opinions. And not just on this topic. I only recently started following her blog, and I'm not quite sure what I think yet, but the blog content is often interesting and I like the idea behind the scimeter.org initiative that she helped to launch.


I managed to form my own opinion as time went by. She has very good points on a few things, but not always, and in those cases it turns out I mostly strongly disagree with the message she conveys.



This post has been voted on by the SteemSTEM curation team and voting trail. It is eligible for support from @curie and @minnowbooster.

If you appreciate the work we are doing, then consider supporting our witness @stem.witness. Additional witness support to the curie witness would be appreciated as well.

For additional information, please join us on the SteemSTEM discord and get to know the rest of the community!

Thanks for including @steemstem in the list of beneficiaries of this post. This granted you stronger support from SteemSTEM. Note that using the steemstem.io app could have yielded even greater support.


Thanks for sharing this knowledgeable blog.
