Re: We All Deserve to Die
Noah Millman has a characteristically deep post about the justice of the death penalty that begins with this:
There’s been a running debate between myself, Alan Jacobs, and Jim Manzi in this space, touching down in a number of posts, over whether it matters (pragmatically) whether people believe human beings have a unique and transcendent value (whether we call that value “human dignity” or a “right to life” or a consequence of being “children of God” or what-you-will).
I’ve spent some time trying to sort through my reactions, and let me do so with a(nother) hypothetical.
Imagine that a large team of AI researchers builds several thousand small, battery-powered, wheeled, box-shaped robots. They write software that governs the motion of these robots. This software has various rules like “If another robot gets within X feet, then move in direction Y at speed Z”. The numerical values of the parameters X, Y and Z are set uniquely for each robot using a pseudo-random number generator. The actual set of rules is very, very long, and no one programmer fully comprehends it. The only way to see how these robots will act is to put them together and watch what happens.
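(To make the setup concrete, here is a toy sketch of one such rule, in Python for brevity; the researchers would presumably write something like C++ or Java, and every concrete name and number below is illustrative rather than drawn from the hypothetical.)

```python
import random

def dist(a, b):
    """Euclidean distance between two (x, y) positions, in feet."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

class Robot:
    """One box-shaped robot; its rule parameters are fixed at build time
    by a pseudo-random number generator seeded with its serial number."""
    def __init__(self, serial):
        rng = random.Random(serial)
        self.serial = serial
        self.x_feet = rng.uniform(1.0, 10.0)       # trigger distance X
        self.y_heading = rng.uniform(0.0, 360.0)   # direction Y, in degrees
        self.z_speed = rng.uniform(0.5, 5.0)       # speed Z
        self.pos = (rng.uniform(0, 300), rng.uniform(0, 160))  # on the field

    def step(self, others):
        """One rule of the (very long) rule set: if another robot gets
        within X feet, move in direction Y at speed Z."""
        for other in others:
            if other is not self and dist(self.pos, other.pos) < self.x_feet:
                return ("move", self.y_heading, self.z_speed)
        return ("idle",)

# Scatter a few thousand robots on the "field" and watch what emerges.
robots = [Robot(serial) for serial in range(2000)]
actions = [r.step(robots) for r in robots]
```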
The researchers scatter them around an enclosed football field and activate them. Generally they start moving around. Because of the parameter values selected for its code, robot number 1837 begins to smash into other robots at a high rate of speed and destroy them.
In sub-case 1, numerous other robots, observing this with their embedded sensors, and operating according to the software that governs their motion, simultaneously move towards this robot and ram it hard enough to destroy it. Then, these robots resume moving much as they had before this event.
In sub-case 2, numerous other robots, observing this with their embedded sensors, and operating according to the software that governs their motion, simultaneously move towards this robot and surround it. They remain there indefinitely, which prevents robot 1837 from moving.
Is it a meaningful question to ask “under what conditions are the robots justified in executing sub-case 1 or sub-case 2”? Is it meaningful to ask whether robot 1837 has done anything “wrong”? Does morality, duty, fairness or anything like that describe the behavior of any of the robots? Has a decision been made, or a will exercised, by any of these robots?
If the answer to these questions is ‘no’, then what distinguishes humans, if we are merely complex machines, from these robots in a way that makes any of these concepts relevant to us?
“Is it meaningful to ask whether robot 1837 has done anything ‘wrong’?”
It seems like it probably is meaningful IF you’re a robot. There’s certainly a third-person perspective from which it’s all just a matter of the physics of the universe playing itself out. (Being able temporarily to adopt that perspective is one of the great glories of our species.) But we have available to us another, internal, intentional perspective, from which ideas like “fairness” and “duty” have meaning. This perspective takes consciousness, mind, and the existence of selves as its basic units, and it maps out the universe (physical and otherwise) along very different lines from those natural to a physicalist approach. I don’t see any great reason to reject that second perspective merely because we know it isn’t the only one.
— Christopher M · Jun 26, 07:48 PM · #
If the answer to these questions is ‘no’, then what distinguishes humans, if we are merely complex machines, from these robots in a way that makes any of these concepts relevant to us?
Moral concepts are, for the most part, reifications of impressions, and these impressions are, indubitably, subjective experiences of neurological states. Thus, moral concepts capture something about what it’s like to be a human being, and only thereby are they relevant to humans.
The problem, however, is that a personal experience of this nature is not intersubjective; it differs, in essence, from an external stimulus like light, which can strike the sensory apparatus of several individuals at the same time and deliver exactly the same information. Thus, we can speak about “injustice” to others, and be confident this concept will be understood, because we all have the innate cognitive mechanisms necessary to produce what has been called the “affecting perception of injustice.” What we can’t be confident of, what we know is practically impossible to attain, is a universal conception of injustice.
It’s a thin reed on which to base a political argument, perhaps, since that kind of reasoning ultimately depends on incommunicable subjective intuitions whose causes are concealed from the conscious mind. Nevertheless, concepts like ‘morally justified’ are relevant to human beings; they connote an experience — the experience of “perceiving” something to be morally justified — which every human brain (barring outliers) is capable of producing.
— JA · Jun 26, 07:58 PM · #
If you assume that the robots’ instructions evolve over time based on observations of and communications with the other robots, then it might be in most or all of the robots’ interest to have a concept of “right” and “wrong” for some instructions, whether or not that concept has an external foundation.
On the other hand, if all the robots’ instructions are fixed at the beginning of the game, then I agree that right and wrong isn’t that meaningful unless the observer brings some concept to the data. (I.e., it may be in our collective interest for me to have a moral reaction to the robots’ conduct because it affects my future behavior in generally beneficial ways).
— J Mann · Jun 26, 08:21 PM · #
Christopher M:
You are imbuing the robot with consciousness by saying “IF you are a robot”. At that point the robot is no longer a robot within the meaning of the hypothetical. One could as easily say, “yes, a big stone has committed a morally blameworthy action when the wind blew it and it rolled down a hill and crushed a pebble, IF you are a stone.”
JA:
Moral concepts are, for the most part, reifications of impressions, and these impressions are, indubitably, subjective experiences of neurological states. Thus, moral concepts capture something about what it’s like to be a human being, and only thereby are they relevant to humans.
If restated in simple language, would this mean something like “just about everybody feels bad when they knock down an old lady, and this feeling (which is caused by a biochemical reaction) is real, therefore morality is real”?
But if so, as you hint at in your subsequent paragraphs, this is no more “morality” than a prudential rule like “don’t put your hand on a hot stove”. I haven’t done something “wrong” when I push the old lady down, I’ve merely done something that doesn’t feel good (for most people most of the time).
— Jim Manzi · Jun 26, 08:30 PM · #
J Mann:
If you assume that the robots’ instructions evolve over time based on observations of and communications with the other robots, then it might be in most or all of the robots’ interest to have a concept of “right” and “wrong” for some instructions, whether or not that concept has an external foundation.
I didn’t want to over-burden the hypothetical, but I thought a lot about this issue. I think that if you define “in the robots’ interest” as survival of this population over time, it is certainly true that if you have some kind of meta-algorithm, like a genetic algorithm (GA), operating to create new rules, then something like tit-for-tat, for example, would likely develop. Of course, recognize that the meta-algorithm is just more C++ or Java code developed by the programmers, so even this is just a further emergent outcome of the code. Further, recognize that unexpected tit-for-tat behavior could have emerged even with a fixed set of unchanging rules, or could even have been explicitly and purposely embedded by the programmers. It doesn’t seem to me that this gets around the problem.
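To make that concrete, here is a toy sketch of such a meta-algorithm; it’s Python rather than the C++ or Java the programmers would actually write, and everything in it (the fitness numbers especially) is invented for illustration. A bare-bones GA mutates a single per-robot parameter, the probability of ramming back when rammed, and the population drifts toward whatever retaliation level the toy fitness function happens to reward:

```python
import random

def fitness(p):
    """Invented toy fitness: retaliating deters rammers (a benefit that
    grows with p) but risks damage in the collision (a cost that grows
    faster), so an intermediate retaliation level does best."""
    return 1.0 * p - 0.7 * p * p   # maximized near p = 0.71

def evolve(population, generations=200, sigma=0.05):
    """Bare-bones genetic algorithm: keep the fitter half each
    generation, and refill the population with mutated copies."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: len(population) // 2]          # selection
        children = [min(1.0, max(0.0, p + random.gauss(0.0, sigma)))
                    for p in survivors]                          # mutation
        population = survivors + children
    return population

pop = [random.random() for _ in range(200)]   # random initial dispositions
evolved = evolve(pop)
print(f"mean retaliation probability: {sum(evolved) / len(evolved):.2f}")
```

The point stands either way: the GA is just more code, so whatever “tit-for-tat” emerges is still an emergent outcome of what the programmers wrote.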
— Jim Manzi · Jun 26, 08:40 PM · #
Surely, no punishment can be just if it increases the risk of greater harm to the victim. The death penalty for child rapists creates an incentive for the rapist to murder his victim. Once he has committed the rape, he knows that he has committed a crime for which there is no greater punishment; so the logical course of action is for him to murder his victim and reduce the likelihood of being successfully prosecuted (as his victim won’t be able to identify him or testify against him). Even apart from my general, first-principles opposition to the death penalty, I can’t condone a system with such potentially perverse consequences.
— Freddie · Jun 26, 09:15 PM · #
Jim,
Yeah, sorry for the abstruse language. It’s hard communicating this stuff when you have to avoid all the meta-ethical landmines lying around (or, as in my case, when your thoughts themselves are unclear).
For instance, I purposefully avoided reducing the experience of a moral judgment to a “feeling” or “attitude” — using “impression,” “experience,” and “perception,” and almost using “qualia” — because I think emotivism (both Ayer’s and Hare’s) is too narrow and vulnerable. But yeah, in normal everyday language, saying “that’s wrong” is much closer to saying “that feels wrong” than it is to a descriptive, intersubjective observation statement like “that is blue.” (And here I’m using Quine’s language.) “Blue” is something we see, but it is based on a property of light that is entirely independent of us. Not so for a term like “wrong action”, which is meaningless when dislodged from perspective and purpose. Saying something is “inherently wrong” is to be confused, disingenuous, or both.
But while these “feelings” are blunt instruments — hammers and chisels and not an etcher’s needle, to borrow a phrase from Kant — they are not entirely unmoored from reality; and that’s the other, less-appreciated part of moral statements. They arose, and exist, to solve the recurring problems of a particular type of selfish-gene — i.e., a selfish-gene that is programmed to build human beings who then build and maintain societies. That’s why moral judgments are, first and foremost, about agents and their intentional actions; agents are, obviously, significant environmental factors for other agents; therefore, a short-hand way of computing an agent’s social-and-therefore-genetic value would clearly contribute to the gene-carrier’s overall fitness. Thus, as Haidt argues, moral judgments can be roughly divided into four “families”: 1) other-condemning: contempt, anger, and disgust; 2) self-conscious: shame, embarrassment, guilt (to which I would add moral preening); 3) other-suffering: compassion; 4) other-praising: gratitude and elevation.
To come at this another way, when you knock down an old lady, not only have you done something that doesn’t feel good, you’ve also registered real, objective, measurable information in your environment that other sentients can then receive and assimilate into their cognitive apparatus. Dispositions can change, behaviors can change; access to food, to mates, to the protection of the tribe can now be in jeopardy.
Therefore, visibly acting like an evil bastard (which most brains conflate with the simpler “acting like an evil bastard”) is the same type of self-destructive behavior as shitting where you eat, at least as far as the selfish-gene is concerned. And just as we have an innate aversion to bodily effluvia and contaminated food, we have an innate aversion to socially inappropriate behavior.
— JA · Jun 26, 09:38 PM · #
You are imbuing the robot with consciousness by saying “IF you are a robot”. At that point the robot is no longer a robot within the meaning of the hypothetical.
It seems the answers to your hypotheticals flow fairly straightforwardly from this premise. So:
Is it a meaningful question to ask “under what conditions are the robots justified in executing sub-case 1 or sub-case 2”? Is it meaningful to ask whether robot 1837 has done anything “wrong”? Does morality, duty, fairness or anything like that describe the behavior of any of the robots? Has a decision been made, or a will exercised, by any of these robots?
No, no, no, and no.
If the answer to these questions is ‘no’, then what distinguishes humans, if we are merely complex machines, from these robots in a way that makes any of these concepts relevant to us?
We are conscious, the robots are not.
We have strong intuitions about the behavior of robots, but those intuitions break down when we start making the definition of “robot” broad enough to encompass beings enough like us to make those intuitions relevant. You could alternatively deny the premise that we are “merely complex machines”, but that opens a much bigger can of worms entirely.
— Trevor · Jun 26, 09:44 PM · #
Let’s refine the model. Say that the robots have a widely-distributed “reason” algorithm (for comparing different coded responses) which, when applied to problems of self-government, routinely produces more optimized outcomes than the “faith” and “absurd hypothetical” algorithms. Further assume that when the robots run the coded responses in sub-case 1 and sub-case 2 all the way through the “reason” algorithm, sub-case 2 reliably outperforms sub-case 1.
Would you be at all surprised to find, in that case, that when we dug into the operating systems of the robots in sub-case 1, they weren’t running the “reason” algorithm through to completion? Wouldn’t it be likely, in fact, that those sub-case 1 Execution Robots were defaulting out to the “faith” or “absurd hypothetical” algorithms when they sensed that the reason algorithm was poised to produce a result that was disfavored in the operating system code?
(I may have defaulted out to my “snark” algorithm.)
— southpaw · Jun 26, 10:21 PM · #
Trevor:
I think that the idea of consciousness is at the root of this discussion, but asserting this word as the distinction between us and the robots just kicks the can down the road a bit. Consciousness is a hard problem. What is it? Where does it come from? How is it, or could it be, related to transcendence – that is, is it the same thing to “deny the premise that we are merely complex machines” as it is to say that we are conscious?
— Jim Manzi · Jun 26, 10:40 PM · #
JA:
To come at this another way, when you knock down an old lady, not only have you done something that doesn’t feel good, you’ve also registered real, objective, measurable information in your environment that other sentients can then receive and assimilate into their cognitive apparatus. Dispositions can change, behaviors can change; access to food, to mates, to the protection of the tribe can now be in jeopardy.
Check, but isn’t this also true for the robots?
— Jim Manzi · Jun 26, 10:42 PM · #
Check, but isn’t this also true for the robots?
There’s no reason why it couldn’t be true for them as well, yes. In fact, with an interested designer revisiting the robots, it’s highly likely they would outpace us in this regard.
— JA · Jun 27, 01:12 AM · #
JA:
I agree that’s possible. But how does this mean the robots have any more intentionality and moral agency? In other words, what if the designers wrote a simple 100-line program that made some robots charge straight at any other robot that was caught in a fire in order to push the other robot out of the way, in spite of the fact that the robot doing so would be destroyed? Would we say that this robot was altruistic or “acted” in any way?
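Such a program really could be that trivial. A toy sketch (in Python, with every name hypothetical):

```python
from dataclasses import dataclass
import math

MAX_SPEED = 5.0   # arbitrary units

@dataclass
class Bot:
    pos: tuple          # (x, y) position on the field
    on_fire: bool = False

def heading(src, dst):
    """Bearing from src toward dst, in degrees."""
    return math.degrees(math.atan2(dst[1] - src[1], dst[0] - src[0]))

def step(robot, others):
    """The entire 'altruism' rule: if any other robot is on fire,
    charge straight at it to push it clear, even though the
    collision will destroy the charging robot."""
    for other in others:
        if other is not robot and other.on_fire:
            return ("charge", heading(robot.pos, other.pos), MAX_SPEED)
    return ("wander",)

bots = [Bot((0.0, 0.0)), Bot((3.0, 4.0), on_fire=True)]
print(step(bots[0], bots))   # -> ('charge', 53.13..., 5.0)
```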
— Jim Manzi · Jun 27, 05:06 AM · #
“Would we say that this robot was altruistic or ‘acted’ in any way?”
We would, if the robot was so complex that we didn’t conceive of it as deterministic.
Not sure how much this adds, but altruism is not a distinctly human trait. There’s been a fair amount of research in evolutionary biology on reciprocal altruism (doing something for the benefit of another organism without any immediate return), and researchers have documented the behavior in a number of non-human species. Vampire bats, for instance, will regurgitate food for other bats who haven’t managed to find food. In the laboratory, scientists have even been able to induce reciprocal altruism.
“In this study, Norway rats received help gaining food from a partner who pulled a stick to produce the food. Rats could therefore be grouped into two classes: those that had previously received help and those that had not. The rats who had previously been helped were then more likely to help another unknown partner receive food. This simple mechanism may promote the evolution of cooperation among unfamiliar non-relatives in many other animals.”
(http://www.sciencedaily.com/releases/2007/07/070703173345.htm)
For what it’s worth . . .
— southpaw · Jun 27, 05:59 AM · #
I’ve restrained myself from commenting on the various death penalty posts up till now.
My reason for supporting the death penalty is very simple, yet nobody seems to address it or to have thought about it. It could be summarized thus: it is a more heinous and unjust act to take away somebody’s freedom than to take his life.
Every society lives according to a set of shared values and the great thing about a democracy is that the society itself chooses the values by which it lives. The law, and especially penal law, is therefore a reflection of the hierarchy of values that the society adopts.
I believe that in any free society life should be sacred, but it should be placed below freedom on the ladder of values the society celebrates. My country has lacked freedom several times over its history, and I am grateful that my forefathers chose their belief in freedom over their personal safety for my sake. Eighteenth-century Americans also faced this choice, and I think we are all grateful for the outcome.
To throw someone in prison for life is one of the most stomach-churning things I can think of. It is to take away not just all his freedom but any hope that he will ever be free. It is to take away his dignity.
To abolish the death penalty and replace it with life imprisonment is essentially saying: “Taking someone’s life is something so horrible we should never do it. But taking someone’s freedom, well, that’s okay.” I think this is backwards. And for a society that is democratic, and should therefore celebrate freedom in its laws, I think that is dangerous.
When someone commits a heinous crime, and society decides what threshold to put on the cruelty of the punishment, I would hope that it puts that threshold somewhere above taking his life, and somewhere below taking his freedom and dignity.
I’m not sure where that fits in Noah’s taxonomy of arguments in favor of the death penalty. But it’s mine.
— PEG · Jun 27, 09:04 AM · #
Jim, thanks for the response. Here’s what I was thinking in a little more detail.
When you ask whether it’s meaningful to ask whether it is “wrong” for the robots to destroy or contain the rogue robot, I guess the first question is who is applying the concept of “wrong” – the robots as actors, or us as observers? Either way, I think it is meaningful.
1) Robots: On the one hand, you may be asking – Should the robots have a conception of “right” and “wrong” about their actions? From the robots’ perspective, it may be beneficial to the robots as a species to have a concept that translates to “wrong.” By this I don’t mean just tit for tat – I mean societal taboos that apply even when the actor isn’t likely to get caught. Human societies operate much, much better when people have a strongly ingrained belief that hard work is moral and that graft and corruption are immoral. As a society, I think we are all better off if our fellow citizens return wallets they find on the street because it’s right.
Therefore, if the robots’ moral codes evolve and are based in part on observations of and communications with the other robots, then the robots themselves may be better off if they can develop some social taboos or memes that go beyond situational calculations.
2) Observers: Alternately, you may be asking – Should we, as observers, have a feeling of right and wrong about the robots’ actions? I think my reasoning still applies. I and my descendants are at least arguably better off if we live in a society that has some ingrained concepts of right and wrong. I want you and the other readers of this comment stream to be the kind of people who return wallets, don’t lie or cheat even when you can get away with it, and keep your promises even if no one would know that you broke them, and I want you to transmit those values to everyone you can. Moral judgments about robots, or fictional characters, or even fictional robots, are one way that we strengthen and communicate our own moral values, which then have tangible effects on our lives and the lives of others.
— J Mann · Jun 27, 01:52 PM · #
How is any of this robot theorizing even slightly relevant when there are functionally interchangeable corrective actions available to us that don’t necessitate destruction? Your hypothetical attempts to succeed by confusing the destruction of a malefactor with the neutralization of the threat it poses. Those are not necessarily the same thing.
And PEG: that’s just ludicrous. If you feel you’re doing death row inmates a kindness by killing them instead of imprisoning them for life, why not extend your generosity and strongly-stated belief in freedom and simply provide them the choice of life in prison or death? Somehow, I suspect this won’t appeal to you.
— Tom · Jun 27, 02:24 PM · #
But how does this mean the robots have any more intentionality and moral agency?
I suppose this is where we admit that it’s all still a mystery. I have hunches about how agency could arise from neurological activity — self-reification in neocortical world-modeling leading to the dueling-mirrors effect of self-referential systems, plus our executive controls over attention and action, plus our long-term memory, plus x, y, and z — but it’s all basically cribbed from other people’s work in this field, and not anywhere close to being a robust theory. For all I know, agency could arise via Penrose-ian quantum effects by taking the analog information of a single neurological pattern and “shoving” it up into superposition — though I doubt it.
I do think, however, that AI will continue looking more functionally biological as it moves forward. It also has a great chance of surpassing neurons in the performance department. Of course, this is your bread and butter and you know far more than I about it.
So, yeah, I do think we have an “intelligible agency”, enough to rest a vague, to-be-filled-in-later theory of responsibility on, etc. But that’s only half of it, really — the Ourworld half, the Archimedean-point, looking-at-the-world-objectively half. The other side, the out-of-my-own-damn-eyes Myworld side, is much as you wrote in Noah’s thread.
— JA · Jun 27, 03:13 PM · #
Oh, and let me take a minute to say how much I appreciate you taking the time to respond to all us commenters. Very bravo.
— JA · Jun 27, 03:20 PM · #
PEG: Imprisoning someone doesn’t remove their freedom, it restricts their freedom. Inmates still have some freedoms. Plus, people can leave prison in the case of a faulty trial, new evidence, rehabilitation, etc. Obviously death is much more final and removes many more freedoms from the convicted than incarceration.
— bcg · Jun 27, 07:36 PM · #
Consciousness is a hard problem. What is it?
I will not even pretend to be able to answer that.
— Trevor · Jun 28, 04:01 AM · #