
Thursday, 1 December 2016

New article online on the possibility of machines as moral agents



One of the real perks of supervising PhD students is that they tend to force you outside your established academic comfort zone and into exploring new territories of philosophical inquiry. This is what happened when I followed in the footsteps of, and eventually joined the work of, Dorna Behdadi, who is pursuing a PhD project in our practical philosophy group on the theme of Moral Agency in Animals and Machines. She has led the work on a new paper, on which I am co-author, that is now available online as a so-called preprint while it is being considered for publication by a scientific journal. The title of the paper is "Artificial Moral Agency: Reviewing Philosophical Assumptions and Methodological Challenges", and it deals with the question of whether machines or any artificial entity (possibly a very advanced one not yet in existence) could ever be ascribed agency of a moral sort, which might imply moral wrongdoing, responsibility for such wrongdoing (on the part of the machine), or similar things.

Its abstract runs thus:

The emerging field of "machine ethics" has raised the issue of the moral agency and responsibility of artificial entities, like computers and robots, under the heading of "artificial moral agents" (AMA). We analyze the philosophical assumptions at play in this debate and conclude that it is characterized by a rather pronounced conceptual and/or terminological confusion. Mostly, this confusion regards how central concepts and expressions (like agency, autonomy, responsibility, free will, rationality, consciousness) are (assumed to be) related to each other. This, in turn, creates a lack of basis for assessing either to what extent proposed positions and arguments are compatible, or whether they address the same issue at all. Furthermore, we argue that the AMA debate would benefit from assessing some underlying methodological issues, for instance regarding the relationship between conceptual, epistemic, pragmatic and ethical reasons and positions. Lastly, because this debate has some family resemblance to debates on the moral status of various kinds of beings, the AMA discussion needs to acknowledge that there exists a challenge of demarcation regarding what kinds of entities can and should be ascribed moral agency.
The paper can be viewed and downloaded for free here and here.

***

Sunday, 30 September 2012

Are Drones more Advanced than Human Brains?




'What?', you may rightfully ask, 'has the philosopher joined the club of positive futurists that he word-whipped so badly recently?' How could the US remote-controlled search-and-destroy flying units popularly known as "drones" ever be compared to the complexity of the wiring or functionality of a real brain? Especially since said drones evidently fail massively (see also here) to do what they are supposed to. Not that I find the activities of humans in military operations much more tasteful, mind you – just so that we can put that little debate aside for now.

But it's not me, folks! It is no lesser an intellectual giant than the very President of Yemen, Abed Rabbo Mansour Hadi, elected by a massive majority as the sole candidate in 2012, who says so – or seems to be saying so, according to the Washington Post (reported also in my own country here and here). Yes, that's right, the very same Yemen where the activities of drones have recently been heavily criticised for inefficiency, inhumanity and political counterproductivity (see also here). What he says more precisely, in response to the exposure of the increased use of drones in Yemen, is this:

Every operation, before taking place, they take permission from the president /.../ The drone technologically is more advanced than the human brain.

Now, it is not my place here to criticise the decisions of the president as such; I'm sure there is more than one political delicacy for him to consider in these matters. However, since he seems to be basing his decision at least partly on the above assessment of the capacities of drones, there seems to be a tiny bit here for the philosopher to have a word about. Simply put: are there any reasons to hold true what he says about drones and brains?

I'm sure that your initial reaction is the same as mine was: obviously not! That the laughably narrow computational, sensory and behavioral capacity of a drone should be comparable to the immensely complex biological wiring of the human brain and its sensory and nervous system, capable of so much more than merely killing people – come off it! So why not just say that? you may wonder. Because, on further inspection, I changed my mind. I confess that the reaction just stated is indeed one interpretation of what the president says, but it is far from the only one, and even less the most reasonable one.

Consider again the comparison made in the quote above.

Note, for instance, that it is made between a part of humans (their brain) and the whole of the drone. Human brains are in fact not capable of doing much unless assisted by the rest of the human body. This is in contrast to a drone, which includes not only its computer and sensory mechanisms but a whole lot of mechanics as well. This makes the drone capable of, e.g., flying and bombing, which the human brain as such is clearly not capable of.

You may retort that the brain may feel and think much better about more things than the drone computer (plus sensors), but that's also a simplification. For sure, a drone is probably much too simple a machine to be ascribed anything like beliefs or feelings (or any sort of sentiment or attitude beyond purely behavioral dispositions of the kind that can be ascribed to any inanimate object). But we also know that a computer has a capacity for computation and quantitative data processing far beyond any human with regard to complexity and speed. So when it comes to getting a well-defined type of task done, the drone computer and sensors may very well do much better than any single human brain or group of human brains.

That something like this is the intended meaning of the statement is actually hinted at by the use of the qualifier "technologically". One interpretation of that word could perhaps be the same as synthetic or manufactured, in which case the statement would become trivially true, but also empty of interesting information: we already knew that brains are not artifacts, didn't we? But the word "technology" may also signify something other than the distinction between natural and artificial; it may rather signify the idea of technology as any use of any type of instrument for the realisation of human plans. In effect, the qualitative comparison between drones and human brains has to be made relative to the assumed goals of a specific plan – in this case, I suppose, that of killing certain people while avoiding killing certain other people. This, of course, opens the issue of whether one should attempt to kill anybody at all, but it is rather obvious that the president does not signal that question to be open for debate, in spite of the fact that pondering it would be a task where a human brain would for sure be vastly superior to a drone.

A pretty boring retort at this stage would be to point to the fact that if it hadn't been for human brains, there wouldn't be any drones. One could add, perhaps, that the operation of drones takes place under the active guidance and operation of humans (including their brains). But surely, what the president is getting at is how things would have gone had humans tried to carry out whatever orders they are trying to carry out without access to the drones.

And, plausibly, this is what the president means and claims: that humans using drones kill more of the people who are supposed to be killed (according to given orders) and fewer of those who are not supposed to be killed, compared to if human soldiers or fighter planes had been used. The statement carries no deeper ramifications for cognitive science or philosophy, except perhaps that our celebrations of the capacities of the human mind and brain tend to become less obvious and look more self-serving when taken down from general and unspecific levels.

It is, of course, an empirical question whether the claim about greater efficiency (relative to some particular set of orders or goals) is correct (as seen, there are some doubts expressed in the Washington Post stories), but it is not an a priori obvious falsehood. To assess it would, however, require access not only to body-count data and such, but also to the precise content of said orders with regard to, e.g., accepted degrees of collateral killing, losses to one's own troops (guaranteed to stay at zero when using drones) and so on. Which, of course, will not be forthcoming. The dear President Hadi can say whatever he wants about the relative capacities of drones and brains and never be faulted.

For my own part, I cannot but remember the rendering (from the book The Man Who Knew Too Much) of a response by Alan Turing in a radio debate on artificial intelligence in the 1950s to the challenge that no computer could ever compose sonnets (or any other poem, one supposes) of similar quality to those of Shakespeare. Turing said that while it was possibly true that computer poems would not be enjoyable for humans, he rather thought that a computer might be able to compose poems of great enjoyment to other computers. If anyone has a more exact reference for this, I would be happy to receive it.