Showing posts with label ethics of technology. Show all posts

Sunday, 1 February 2015

Why Aren't Existential Risk / Ultimate Harm Argument Advocates All Attending Mass?




An increasingly popular genre in the sort of applied philosophy and ethics of technology that does not so much engage with actual technological development as with more or less wild fantasies about possibly forthcoming ones centres on the notions of "existential risks" or "ultimate harms", or similar expressions. The theme is currently inspiring several research environments at world-leading universities, such as this one and this one (where you can find many links to other sources, articles, blog posts, and so on), and is given quite a bit of space in recent scholarly literature on a topic often referred to as the ethics of emerging technology. Now, personally and academically, I have found much of this development, as it has actually proceeded, to be to a large extent a case of the emperor's new clothes. The fact that there are possible threats to human civilizations, the existence of humanity, life on earth or, at least, extended human well-being, is not exactly news, is it? Neither is it any kind of new insight that some of these threats are created by humans themselves. Nor is it any sort of recent revelation that established moral ideas, or theories of rational decision making, may provide reason for avoiding or mitigating such threats. Rather, both these theses follow rather trivially from a great many well-established ethical and philosophical theories, and have been well known to do so for hundreds of years. Still, piece after piece is being produced in the existential risk genre making this out as some sort of recent finding, and making grand gestures at proving the point against more or less clearly defined straw men.

At the same time, quite a bit of what is currently written on the topic strikes me as philosophically shallow. For instance, the notion that the eradication of the human species has to be a bad thing is far from obvious from a philosophical point of view - this would depend on such things as the source of the value of specifically human existence, the manner of the imagined extinction (it certainly does not have to involve any sort of carnage or catastrophe), and what might possibly come instead of humanity or currently known life once extinct, and how that is to be valued. Similarly, it is a very common step in the typical existential risk line to jump rather immediately from the proposition of such a risk to the suggestion that substantial (indeed, massive) resources should be spent on its prevention, mitigation or management. This goes for everything from imagined large-scale geo-engineering solutions to environmental problems, and dreams of outer space migration, to so-called human enhancement to adapt people to be able to handle otherwise massive threats in a better way. At the same time, the advocates of the existential risk line of thought also urge caution in the application of new, hitherto unexplored technology, such as synthetic biology or (if it ever comes to appear) "real" A.I. and android technology. However, there too, the angle of analysis is often restricted to this very call, typically ignoring the long-ongoing debates in the ethics of technology, bioethics, environmental ethics, et cetera, where the issue of how much of, and what sort of, such caution may be warranted in light of various good aspects of the different technologies considered has been analysed for decades.
And, to be frank, this simplification seems to be the only thing that is special about the existential risk argument advocacy: the idea that the mere possibility of a catastrophic scenario justifies substantial sacrifices, without having to complicate things by pondering alternative uses of resources.



Now, this kind of argument is (or should be) well known to anyone with a philosophical education, since it seems to share the basic form of the philosophical classic known as Pascal's Wager. In this argument, the French philosopher and mathematician Blaise Pascal offered a "proof" of the rationality of believing in God (the sort of God found in Abrahamic monotheistic religion, that is), based on the possible consequences of belief or non-belief, given the truth or falsity of the belief. You can explore the details of Pascal's argument, but the basic idea is that in the face of the immense consequences of belief and non-belief if God exists (eternal salvation vs. eternal damnation), it is rational to bet on the existence of God, no matter what theoretical or other evidence for the truth of this belief exists and no matter the probability of this truth. It seems to me that the typical existential risk argument advocacy subscribes to a very similar logic. For instance, the standard line to defend that resources should be spent on probing and (maybe) facilitating, e.g., possible extraterrestrial migration for humanity seems to have the following form:

1) Technology T might possibly prevent/mitigate existential risk, E

2) It would be really, really, very, very bad if E was to be actualised

3) Therefore: If E was otherwise to be actualised, it would be really, really, very, very good if E was prevented

4) Therefore: If E was otherwise to be actualised, it would be really, really, very, very good if we had access to a workable T

5) Therefore: there are good reasons to spend substantial resources on probing and (maybe, if that turns out to be possible) facilitating a workable T

That is, what drives the argument is the (mere) possibility of a massively significant outcome, and the (mere) possibility of a way to prevent that particular outcome, thus doing masses of good. Now, I'm sure that everyone can see that this argument is far from obviously valid, even if we ignore the question of whether or not premise 2 is true, and this goes for Pascal's Wager too in parallel ways. For instance, the existential risk argument above seems to ignore that there is an innumerable number of such (merely) possible existential risk scenarios, as well as innumerable (merely) possibly workable technologies that might help to prevent or mitigate each of these, and it is unlikely (to say the least) that we have resources to bet substantially on them all, unless we spread them so thin that this action becomes meaningless. Similarly, there are innumerable possible versions of the god that lures you with threats and promises of damnation and salvation, and of what that particular god may demand in return, often implying a ban on meeting a competing deity's demands, so the wager doesn't seem to tell you to try to start believing in any particular one of all these (merely) possible gods. Likewise, the argument above completely ignores the (rather high) likelihood that the mobilised resources will be mostly wasted, and that, therefore, there are substantial opportunity costs attached to not using these resources on better proven strategies against better identified threats and problems (say, preventing global poverty) - albeit maybe not as massive as the outcomes in the existential risk scenarios. Similarly, Pascal's Wager completely ignores all the good things one needs to give up to meet the demands of the god promising eternal salvation in return (for instance, spending your Sundays working for the alleviation of global poverty).
None of that is worth any consideration, the idea seems to be, in light of the massive stakes of the existential risk / religious belief or non-belief scenarios.  
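For readers who like to see the decision-theoretic skeleton laid bare, the shared wager-style logic and the dilution objection just described can be put in a toy expected-utility model. This is only a sketch of the argument's form: every number below (probabilities, payoffs, the "success curve") is a hypothetical illustration of mine, not an estimate anyone has defended.

```python
# Toy sketch of the expected-utility logic shared by Pascal's Wager and the
# existential risk argument, plus the "many gods / many risks" objection.
# All numbers are hypothetical illustrations, not estimates.

def expected_utility(p, payoff_if_true, payoff_if_false):
    """Expected utility of an act, given probability p that the scenario is real."""
    return p * payoff_if_true + (1 - p) * payoff_if_false

# 1) The wager-style step: a huge enough payoff swamps any tiny probability.
p = 1e-9                  # arbitrarily small probability of the scenario
huge = 1e12               # finite stand-in for the "infinite" stakes
cost = -1                 # Sundays at mass / resources spent on technology T
eu_bet = expected_utility(p, huge + cost, cost)
eu_abstain = expected_utility(p, -huge, 0)
assert eu_bet > eu_abstain   # the bet "wins" no matter how small p is

# 2) The objection: with innumerable merely-possible scenarios and a fixed
# budget, each bet gets a sliver of resources and - assuming (hypothetically)
# that mitigation success scales with spending - buys almost nothing.
def payoff_per_scenario(budget, n_scenarios, p_risk, harm_averted):
    spend = budget / n_scenarios
    p_success = min(1.0, spend / 100.0)   # made-up success curve
    return p_risk * p_success * harm_averted

few = payoff_per_scenario(1000, 10, 1e-6, 1e12)
many = payoff_per_scenario(1000, 1_000_000, 1e-6, 1e12)
assert many < few   # spreading the same budget thin dilutes every bet
```

The same structure models the many-gods problem: each rival deity is one more "scenario" competing for your finite supply of Sundays.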

Now, I will not pick any quarrel with the existential risk argument as such on these grounds, although I do think that the more developed ways to analyse risk scenarios and their ethical implications, already in existence and used in the fields I referred to above, will mean lots of trouble for the simplistic aspects already mentioned. What I do want to point to, however, is this: If you're impressed by the existential risk argument, you should be equally impressed by Pascal's Wager. Thus, in accordance with Pascal's recommendation that authentic religious belief can be gradually instilled via the practice of rituals, you should – as should indeed the existential risk argument advocates themselves – spend your Sundays celebrating mass (or performing any other sort of ritual demanded by the god you bet on). I very much doubt, however, that you (or they) in fact do that, or even accept the conclusion that you (or they) should be doing that.

Why on earth is that?




Thursday, 2 June 2011

Read Entire Chapt. 1 of My New Book Online for Free

Springer, which publishes my new book on the ethical basis of the precautionary principle, The Price of Precaution and the Ethics of Risk, has permitted Google Books to make the entire first chapter available for online reading. Here it is embedded:




And if you prefer that, here's a link to the Google Books site. And here's a presentation of the book from a recent post, with links for sampling other chapters and looking at the index.

Saturday, 21 May 2011

My Book on the Ethical Basis of the Precautionary Principle is Out!

So, some shameless self-promotion:



My book on the ethical basis of the precautionary principle, The Price of Precaution and the Ethics of Risk, is now officially released by Springer. To view the table of contents, sample substantial portions of chapters and look up names or subjects in the index, click on the button below:


 Here's the content summary in all of its glory:

For a couple of decades, the notion of a precautionary principle has played a central and increasingly influential role in international as well as national policy and regulation regarding the environment and the use of technology. The principle urges society to take action in the face of potential risks of human activities in these areas, and the recent focus on climate change has further sharpened the importance of this idea. However, the idea of a precautionary principle has also been problematised and criticised by scientists, scholars and policy activists, and been accused of almost every intellectual sin imaginable: unclarity, impracticality, arbitrariness and moral as well as political unsoundness. In that light, the very idea of precaution as an ideal for policy making rather comes out as a dead end. On the basis of these contrasting starting points, Christian Munthe undertakes an innovative, in-depth philosophical analysis of what the idea of a precautionary principle is and should be about. A novel theory of the ethics of imposing risks is developed and used as a foundation for defending the idea of precaution in environmental and technological policy making against its critics, while at the same time avoiding a number of identified flaws. The theory is shown to have far-reaching consequences for areas such as bio-, information- and nuclear technology, and for global environmental policy in areas such as climate change. The author argues that, while the price we pay for precaution must not be too high, we have to be prepared to pay it in order to act in an ethically defensible way. A number of practical suggestions for precautionary regulation and policy making are made on this basis, and some challenges to basic ethical theory as well as to consumerist societies, the global political order and liberal democracy are identified.

Thank you for your kind attention!