Here's another game it's useful to learn how to play if you want to write near-future science fiction: spot the Existential Threat.
An existential threat (for purposes of this thought experiment) is some phenomenon or activity — it may be natural or may be human-contrived — that threatens, in ascending order of threatliness, the survival of (1) technological civilization, (2) the human species itself, (3) life on Earth, or (4) the universe. It may also be qualified by the probability of it happening annually. Obviously, a type 3 event that occurs on average once every ten billion years is nothing to lose sleep over (that's the estimated life expectancy of our planet), while a type 1 event with a probability of 10% per annum is definitely worrying.
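To get an intuition for why annual probability matters so much, it helps to compound it over a human or civilizational timescale. Here's a quick back-of-the-envelope sketch (my own illustration, assuming independent year-to-year odds, which is obviously a simplification):

```python
def cumulative_risk(annual_probability, years):
    """Probability of at least one occurrence over a span of years,
    assuming independent, identically-distributed annual trials."""
    return 1 - (1 - annual_probability) ** years

# A type 1 event with a 10% annual probability is near-certain
# within a single human lifetime:
print(round(cumulative_risk(0.10, 70), 3))   # 0.999

# A type 3 event averaging once per ten billion years barely
# registers even over a million years:
print(round(cumulative_risk(1e-10, 1_000_000), 6))  # 0.0001
```

That's the whole point of the qualifier: the threat class tells you how bad, the annual probability tells you whether to lose sleep.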
Before I invite you to join in and supply some of your own, here are some pointers:
The classic example of a human-contrived type 1 existential threat was (and remains) a superpower-level nuclear war. While human civilization could survive a limited nuclear exchange (for example, between India and Pakistan, involving fewer than 250 warheads), a full-scale US/Soviet war fought between 1965 and 1985 would have involved between 10,000 and 50,000 nuclear warheads, most of them thermonuclear, and would have reduced almost the entire developed world to rubble. In some regions, the consequences of such a war would have approximated a type 2 threat: the UK, for example, discontinued civil defence in the late 1970s after estimates of the population surviving at six months dropped to 2% of the pre-war figure.
I'm not attaching a probability to this particular threat, but we came horrifyingly close on several occasions over the past 60 years (the time of greatest danger probably being late 1983, during the Able Archer/Operation RYAN crisis).
Other speculative type 1 human-contrived threats include: massive anthropogenic climate change (80% of the human population lives within 200 miles of a coastline, so rising sea levels would result in really serious resettlement issues), a catastrophic failure of the global financial system coinciding with energy price instability, and so on.
There are natural type 1 existential threats, too. A Carrington event such as the coronal mass ejection of August 28th to September 2nd, 1859, would wreak havoc with electricity grids worldwide, destroying high voltage transformers and blacking out very large parts of the globe: it could well take years to rebuild infrastructure after such a magnetic storm, and our current dependence on just-in-time logistics means that we might not be able to weather the inevitable shortages.
Again, a supervolcano eruption probably qualifies as a type 1 threat — or possibly a type 2 threat, depending on magnitude. The Yellowstone caldera erupts roughly once per 600,000 years, and produces an ash fall on the order of a thousand times greater than the Mount St Helens eruption of 1980; one supereruption resulted in ash deposits over 30cm deep more than 1500km away from the caldera. The last eruption, 640,000 years ago, blasted around 1000 cubic kilometres of rubble and dust into the sky: the effects on insolation would be global, resulting in worldwide crop failures for many years after such an event. Yellowstone is not the only active supervolcano on Earth, and an eruption of any one of them is likely to have drastic global effects.
I suspect there would be human survivors from even a major supereruption — but such an event would probably mark the end of global civilization and possibly reduce the survivors to early iron-age techniques for a while.
Type 2 human-contrived threats are somewhat rarer, insofar as nobody is likely to do such a thing deliberately — Tom Clancy fantasies aside, nobody actually wants to render the human species extinct. [WARNING: commenters who contradict me on this will be asked to provide formal references or shut the hell up. A lot of groups assert that proponents of rival ideologies want to do horrible things, but on examination this almost always turns out to be propaganda.]
We are probably capable of intentionally exterminating humanity. One method was discovered more or less by accident by Australian vaccine researchers in 2001: we now know (more or less: I don't think anyone's been crazy enough to test it) how to create a genetically modified strain of smallpox likely to be highly contagious and have 100% mortality in humans (by using the gene coding for interleukin-4 to suppress cellular immunity in its victims). Other, more expensive methods might include using gravity tugs to nudge 200-metre near-Earth asteroids towards the Earth, rather than away from it — presumably to one-up the annoying neighbour bragging about their 100 megaton H-bomb.
But deliberate human-origin type 2 threats aren't that interesting, because they presuppose the presence of a moustachio-twirling super-villain who wants to kill everyone, including themselves, and who can convince enough followers to join in to make a decent fist of the project: this is, on the face of it, unlikely.
What are the accidental type 2 existential threats we might create over the next century?
I'm not going to discuss climate change: it's too obvious, and besides, it's a third rail for intelligent discourse on this blog. Nor am I going to discuss the prospects of a hard take-off hostile singularity: you've all seen the Terminator movies, right? And the grey goo scenario is somewhat discredited these days.
In fact, I'm going to eschew anything on the Wikipedia list of risks to civilization, humans, and planet Earth and look for something new.
One possibility is a population implosion. While the conventional wisdom asserts that we're in the middle of a population explosion, the reality is quite different; developing countries almost invariably pass through a demographic transition, as female education and emancipation, and reductions in child mortality, result in smaller families. Today, most of the developed world has a total fertility rate of under 2.0, where 2.1 is around the level required to maintain a stable population in the long term.
If half your children die before age 5, and you expect to rely on them for support in your old age, you might well want a big family for practical reasons. But if they all survive and there's a state pension system, the work and expense involved in raising them is a significant deterrent. Interestingly, human parents seem to judge their desired family size by reference to their neighbours. If the average family size is 6 children, then having an extra 1 or 2 is not significant — but if the average family has 1-2 children, having 3 or 4 marks one out as somewhat abnormal. Thus, once the demographic transition has occurred, it may actually be quite difficult to raise the TFR again.
We're currently living through an "overshoot", as TFR drops like a stone world-wide but the people born during previous baby booms remain alive. Over the next century, barring breakthroughs in anti-ageing treatment, we can expect our demographic balance to shift significantly towards old age, and then a purely natural population crash.
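How fast does a sub-replacement TFR bite once the overshoot is over? A crude generational sketch makes the point (a toy model of my own: it ignores age structure, mortality shifts, and migration, so it's an illustration, not demography):

```python
def project_population(initial, tfr, generations, replacement=2.1):
    """Crude generational projection: each ~30-year generation scales
    the population by tfr/replacement. Ignores age structure,
    migration, and mortality -- a toy model, not real demography."""
    pops = [initial]
    for _ in range(generations):
        pops.append(pops[-1] * tfr / replacement)
    return pops

# At a TFR of 1.4 (roughly where parts of southern Europe and East
# Asia sit today), the population falls by a third per generation:
for gen, pop in enumerate(project_population(10_000_000, 1.4, 5)):
    print(f"generation {gen}: {pop:,.0f}")
```

Five generations at TFR 1.4 leaves you with about 13% of the starting population — which is the kind of curve that puts the "minimum viable workforce" question on the table.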
The risk of a type 2 threat emerges if it turns out that maintaining our technological civilization requires a certain minimum workforce, spread across a number of specialities, that cannot be sustained by a much smaller population following such a crash. At that point things get hard to predict, but if we're diverting lots of labour into supporting the elderly and infirm, the young aren't going to have the energy or inclination to raise large families.
Mitigating factors would include the development of nursing robotics or medical treatments for old age. Oh, and there's some early evidence pointing to a possible fifth stage of the demographic transition, in which TFR in the most developed countries rebounds towards 2.0 (in which case we're probably in a metastable system and there's nothing to worry about in the long term). But it's still an issue that bears consideration.
Other contributory factors to an extinction-level population crash would be the development of a real-life experience machine as per Nozick, or of other technologies capable of delivering supernormal stimuli (as implicitly described in my novel Saturn's Children). Nozick's philosophical experience machine was designed as a thought experiment to undermine ethical hedonism; but running such an experiment for real on a stressed population that is already in a demographic spiral would be extremely risky.
Anyway, it's your turn now.
Can you think of any new, plausible end-of-humanity scenarios that don't feature in this list? (Please provide reasoning and footnotes ...)