A new scientific study from Oxford University examines 12 ways that civilization could end in the next 100 years and assigns a probability to each where possible.
This isn’t a bunch of sci-fi writers sitting at a bar, knocking back shots, and coming up with the most creative way the world will end. The study was conducted by the Future of Humanity Institute, which is described as “a multidisciplinary research institute at the University of Oxford” that “enables a select set of leading intellects to bring the tools of mathematics, philosophy, and science to bear on big-picture questions about humanity and its prospects.”
The report itself says: “This is a scientific assessment about the possibility of oblivion, certainly, but even more it is a call for action based on the assumption that humanity is able to rise to challenges and turn them into opportunities. We are confronted with possibly the greatest challenge ever and our response needs to match this through global collaboration in new and innovative ways.”
There is, of course, room for debate about risks that are included or left out of the list. I would have added an intense blast of radiation from space, either a super-eruption from the sun or a gamma-ray burst from an exploding star in our region of the galaxy. And I would have included a sci-fi-style threat from an alien civilisation either invading or, more likely, sending a catastrophically destabilising message from an extrasolar planet. Both are, I suspect, more probable than a supervolcano.
The choice of end-times scenarios is interesting:
A few of the existential threats are “exogenic”, arising from events beyond our control, such as asteroid impact. Most emerge from human economic and technological development. Three (synthetic biology, nanotechnology and artificial intelligence) result from dual-use technologies, which promise great benefits for society, including reducing other risks such as climate change and pandemics — but could go horribly wrong.
Do scientists at the institute believe we are more likely to destroy ourselves than to be wiped out by phenomena from space? I might take issue with that theory simply because, if anything, we have proven over the last 75 years or so that we are capable of managing the man-made threats to our existence. With clean air and clean water regulations fairly common the world over, it doesn’t appear that we’re going to poison ourselves. And the nuclear war scenario has — so far — been offset by self-preservation among nuclear powers. No one has been stupid enough to launch a nuclear weapon thinking someone wouldn’t launch one back at them.
Are we smart enough to manage these threats?
Artificial intelligence

AI is the most discussed apocalyptic threat at the moment. But no one knows whether there is a real risk of extreme machine intelligence taking over the world and sweeping humans out of its way. The study team therefore gives a very wide probability estimate.
Bad global governance
This category covers mismanagement of global affairs so serious that it is the primary cause of civilisation collapse (rather than a secondary response to other disasters). One example would be the emergence of an utterly incompetent and corrupt global dictatorship. The probability is impossible to estimate.
Extreme climate change
Conventional modelling of climate change induced by human activity (adding carbon dioxide to the atmosphere) has focused on the most likely outcome: global warming by up to 4C. But there is a risk that feedback loops, such as the release of methane from Arctic permafrost, could produce an increase of 6C or more. Mass deaths through starvation and social unrest could then lead to a collapse of civilisation.
Probability: 0.01%

Synthetic biology

Genetic engineering of new super-organisms could be enormously beneficial for humanity. But it might go horribly wrong, with the emergence and release, accidentally or through an act of war, of an engineered pathogen targeting humans or a crucial part of the global ecosystem. The impact could be even worse than any conceivable natural pandemic.
Nanotechnology

Ultra-precise manufacturing on an atomic scale could create materials with wonderful new properties, but they could also be used in frightening new weapons. There is also the “grey goo” scenario of self-replicating nanomachines taking over the planet.
Quick — someone tell Al Gore and Drs. Mann and Hansen that there’s a 0.01% chance of us all dying as a result of global warming. Not that it would matter to them.
Reality is always more difficult to predict than it might appear. “Bad global governance” includes governments building up astronomical amounts of debt to fund their welfare states and then — collapse. Money wouldn’t be worth the paper it’s printed on and the resulting social unrest would destroy civilization. Some would put the likelihood of that scenario at better than 50-50. Governments have shown so far that they are completely incapable of addressing their debt problems, even when catastrophe stares them in the face. Greece is just the tip of the iceberg.
As for the concern over artificial intelligence and other technological threats, I agree that they are extremely unlikely to come about. Those who believe the threat is significant rely on a “boiling frog” scenario in which we sit and do nothing as the machines get smarter and smarter. AI that evolves to the point of threatening humanity will not be a bolt from the blue; it is far more likely to be a gradual improvement of computer capabilities that we will be able to manage just fine.
I agree with the author that the scientists should have included the alien invasion scenario. After all, what’s a list of possible Doomsday events without the prospect of little green men coming to Earth to kill us all? The list just doesn’t seem complete without it.