Sunday, 1 February 2015
Why Aren't Existential Risk / Ultimate Harm Argument Advocates All Attending Mass?
An increasingly popular genre in applied philosophy and the ethics of technology, one that engages not so much with actual technological development as with more or less wild fantasies about possibly forthcoming ones, revolves around the notions of "existential risk" or "ultimate harm", or similar expressions. The theme currently inspires several research environments at world-leading universities, such as this one and this one (where you can find many links to other sources, articles, blog posts, and so on), and is given quite a bit of space in recent scholarly literature on a topic often referred to as the ethics of emerging technology. Now, personally and academically, I have found much of this development, as it has actually proceeded, to be to a large extent a case of the emperor's new clothes. The fact that there are possible threats to human civilization, the existence of humanity, life on earth or, at least, extended human well-being, is not exactly news, is it? Neither is it any kind of new insight that some of these threats are created by humans themselves. Nor is it any sort of recent revelation that established moral ideas, or theories of rational decision making, may provide reason for avoiding or mitigating such threats. Rather, both these theses follow rather trivially from a great many well-established ethical and philosophical theories, and have been well known to do so for hundreds of years. Still, piece after piece is being produced in the existential risk genre presenting this as some sort of recent finding, and making grand gestures at proving the point against more or less clearly defined straw men.
At the same time, quite a bit of what is currently written on the topic strikes me as philosophically shallow. For instance, the notion that the eradication of the human species has to be a bad thing is far from obvious from a philosophical point of view: this would depend on such things as the source of the value of specifically human existence, the manner of the imagined extinction (it certainly does not have to involve any sort of carnage or catastrophe), and what might possibly come instead of humanity or currently known life once extinct, and how that is to be valued. Similarly, it is a very common step in the typical existential risk line to jump rather immediately from the proposition of such a risk to the suggestion that substantial (indeed, massive) resources should be spent on its prevention, mitigation or management. This goes for everything from imagined large-scale geo-engineering solutions to environmental problems, through dreams of outer space migration, to so-called human enhancement meant to adapt people to handle otherwise massive threats better. At the same time, the advocates of the existential risk line of thought also urge caution in the application of new, hitherto unexplored technology, such as synthetic biology or (if it ever comes to appear) "real" A.I. and android technology. However, also there, the angle of analysis is often restricted to this very call, typically ignoring the long-standing debates in the ethics of technology, bioethics, environmental ethics, et cetera, over how much of and what sort of such caution may be warranted in light of the various good aspects of the different technologies considered. And, to be frank, this simplification seems to be the only thing that is special about the existential risk argument advocacy: the idea that the mere possibility of a catastrophic scenario justifies substantial sacrifices, without having to complicate things by pondering alternative uses of resources.
Now, this kind of argument is (or should be) well known to anyone with a philosophical education, since it shares the basic form of the philosophical classic known as Pascal's Wager. In this argument, the 17th-century French philosopher and mathematician Blaise Pascal offered a "proof" of the rationality of believing in God (the sort of God found in Abrahamic monotheistic religion, that is), based on the possible consequences of belief or non-belief, given the truth or falsity of the belief. You can explore the details of Pascal's argument, but the basic idea is that in the face of the immense consequences of belief and non-belief if God exists (eternal salvation vs. eternal damnation), it is rational to bet on the existence of God, no matter what theoretical or other evidence for the truth of this belief exists and no matter the probability of this truth. It seems to me that the typical existential risk argument advocacy subscribes to a very similar logic. For instance, the standard line of defence for spending resources on probing and (maybe) facilitating, e.g., possible extraterrestrial migration for humanity, seems to have the following form (a rough numerical sketch of the expected-value logic at work follows right after the list):
1) Technology T might possibly prevent/mitigate existential risk E
2) It would be really, really, very, very bad if E were to be actualised
3) Therefore: if E were otherwise to be actualised, it would be really, really, very, very good if E were prevented
4) Therefore: if E were otherwise to be actualised, it would be really, really, very, very good if we had access to a workable T
5) Therefore: there are good reasons to spend substantial resources on probing and (maybe, if that turns out to be possible) facilitating a workable T
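To make the shared logic vivid, here is a minimal back-of-the-envelope sketch in Python. All the figures are invented purely for illustration (none come from the existential risk literature): the point is only that, under naive expected-value reasoning, a mere sliver of probability of an astronomically good outcome swamps any finite cost.

```python
# A minimal sketch (with made-up numbers) of the naive expected-value logic
# that seems to drive both Pascal's Wager and the argument above.

def naive_expected_value(p_success, value_if_success, cost_of_bet):
    """Expected value of betting finite resources on a merely possible payoff."""
    return p_success * value_if_success - cost_of_bet

p_T_averts_E = 1e-9         # hypothetical: a mere sliver of possibility
value_of_averting_E = 1e20  # "really, really, very, very" good
cost_of_pursuing_T = 1e9    # substantial resources

print(naive_expected_value(p_T_averts_E, value_of_averting_E,
                           cost_of_pursuing_T))
# -> roughly 1e11: positive, and it stays positive however small p_T_averts_E
# is made, provided the stakes are taken to be vast enough. With literally
# infinite stakes (Pascal's salvation), any non-zero probability wins the
# bet, which is why the argument is insensitive to evidence and probability.
```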
That is, what drives the argument is the (mere) possibility of a massively significant outcome, and the (mere) possibility of a way to prevent that particular outcome, thus doing masses of good. Now, I'm sure that everyone can see that this argument is far from obviously valid, even if we ignore the question of whether or not premise 2 is true, and this goes for Pascal's Wager too in parallel ways. For instance, the existential risk argument above seems to ignore that there is an innumerable amount of such (merely) possible existential risk scenarios, as well as innumerable (merely) possibly workable technologies that might help to prevent or mitigate each of these, and it is unlikely (to say the least) that we have resources to bet substantially on them all, unless we spread them so thin that the action becomes meaningless. Similarly, there are innumerable possible versions of the god that lures you with threats and promises of damnation and salvation, and of what that particular god may demand in return (often including a ban on meeting a competing deity's demands), so the wager doesn't seem to tell you to start believing in any particular one of all these (merely) possible gods. Likewise, the argument above completely ignores the (rather high) likelihood that the mobilised resources will be mostly wasted, and that there are therefore substantial opportunity costs attached to not using these resources on better-proven strategies against better-identified threats and problems (say, preventing global poverty), albeit ones maybe not as massive as the outcomes in the existential risk scenarios. Similarly, Pascal's Wager completely ignores all the good things one needs to give up to meet the demands of the god promising eternal salvation in return (for instance, spending your Sundays working for the alleviation of global poverty). None of that is worth any consideration, the idea seems to be, in light of the massive stakes of the existential risk and religious belief scenarios.
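The dilution and opportunity-cost points can be put in the same back-of-the-envelope terms. The following sketch again uses purely hypothetical figures (the count of risks, the budget, and the funding threshold are all invented) to show how a fixed budget spread over innumerable merely possible bets drops below any useful level, while a proven strategy retains a clearly positive expected value.

```python
# A second invented-numbers sketch of the dilution and opportunity-cost
# points: the same logic that funds one merely possible bet funds them all,
# and a fixed budget spread over innumerable such bets thins out.

n_possible_risks = 10_000   # hypothetical count of merely possible risks
total_budget = 1e9          # fixed resources available
per_bet = total_budget / n_possible_risks

# Hypothetical assumption: a bet keeps its sliver of a chance only above
# some minimal funding level; below it the resources are simply wasted.
minimal_effective_funding = 1e7
print(per_bet, per_bet >= minimal_effective_funding)  # 100000.0 False

# Opportunity cost: a well-proven strategy against an identified problem
# (say, poverty reduction) with a high success probability and a large,
# though not astronomical, payoff.
p_proven, value_proven = 0.9, 1e10
print(p_proven * value_proven - total_budget)         # 8e9: clearly positive
```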
Now, I will not pick any quarrel with the existential risk argument as such on these grounds, although I do think that the more developed ways of analysing risk scenarios and their ethical implications that already exist, and are used in the fields I referred to above, will mean lots of trouble for the simplistic aspects already mentioned. What I do want to point to, however, is this: if you're impressed by the existential risk argument, you should be equally impressed by Pascal's Wager. Thus, in accordance with Pascal's recommendation that authentic religious belief can be gradually instilled via the practice of rituals, you should – as should indeed the existential risk argument advocates themselves – spend your Sundays celebrating mass (or any other sort of ritual demanded by the god you bet on). I very much doubt, however, that you (or they) in fact do that, or even accept the conclusion that you (or they) should be doing that.
Why on earth is that?
Well, a crucial disanalogy you're missing is that for some technological risks, it's possible to have a reasonable sense of how likely they are (think: nuclear war, for example) and how much of a risk they represent, and this makes X-risk very unlike the case where there are infinitely many gods, all of whom seem about equi-probable on current evidence (assuming that's true). It's this that makes the project seem totally quixotic and impossible in the gods case. (This is not an endorsement of the view that any of the people you're attacking here are actually informed enough about the relevant technologies to do the kind of work they want to do. I take no stand on that.)
At least, if the latter isn't what's driving things, then your argument basically amounts to: 'if you like the idea that how much of your resources you should invest in preventing an outcome is a function of how bad the outcome is, the chance of it happening absent your doing anything, and the chance of it happening given your acting to prevent it, and you think that sometimes this can justify attempting to prevent *really* bad outcomes, even when they're quite unlikely to happen anyway and your intervention is relatively unlikely to make a difference, then you should like Pascal's Wager.' That *may* be true, but if so, then, to say the least, it's a problem for far more than those people who take the current X-risk literature seriously.
Stirling, Andrew (2007). Risk, precaution and science: towards a more constructive policy debate. Talking point on the precautionary principle. EMBO Reports 8(4), pp. 309-315. ISSN 1469-221X.
Stirling's work is nice, although it doesn't go very far into the ethical/philosophical stuff at the base of these issues - it's more about making sense of precautionary thinking at a policy level. There's a growing body of literature addressing the harder underlying issues, though. I refer to some of it in this recent encyclopedia article: http://onlinelibrary.wiley.com/doi/10.1002/9781444367072.wbiee550/abstract