Anyone who has read histories of the Cold War, including the Cuban Missile Crisis and the 1983 nuclear false alarm, must be struck by how incredibly close humanity has come to wreaking terrible destruction on itself. Nuclear weapons were the first technology humans created that was truly capable of causing such harm, but the list of potential threats is growing, from artificial pandemics to runaway super-powerful artificial intelligence. In response, today’s guest Martin Rees and others founded the Cambridge Centre for the Study of Existential Risk. We talk about what the major risks are, and how we can best reason about very tiny probabilities multiplied by truly awful consequences. In the second part of the episode we start talking about what humanity might become, as well as the prospect of life elsewhere in the universe, and that was so much fun that we just kept going.
Support Mindscape on Patreon.
Lord Martin Rees, Baron of Ludlow, received his Ph.D. in physics from the University of Cambridge. He is currently Emeritus Professor of Cosmology and Astrophysics at the University of Cambridge, as well as Astronomer Royal of the United Kingdom. He was formerly Master of Trinity College and President of the Royal Society. Among his many awards are the Heineman Prize for Astrophysics, the Gruber Prize in Cosmology, the Crafoord Prize, the Michael Faraday Prize, the Templeton Prize, the Isaac Newton Medal, the Dirac Medal, and the British Order of Merit. He is a co-founder of the Centre for the Study of Existential Risk.
Web page | Institute for Astronomy, Cambridge, web page | Google Scholar publications | Amazon.com author page | Wikipedia | Centre for the Study of Existential Risk
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.