Re: We All Deserve to Die

Noah Millman has a characteristically deep post about the justice of the death penalty that begins with this:

There’s been a running debate between myself, Alan Jacobs, and Jim Manzi in this space, touching down in a number of posts, over whether it matters (pragmatically) whether people believe human beings have a unique and transcendent value (whether we call that value “human dignity” or a “right to life” or a consequence of being “children of God” or what-you-will).

I’ve spent some time trying to sort through my reactions; let me do so with a(nother) hypothetical.

Imagine that a large team of AI researchers builds several thousand small, battery-powered, wheeled, box-shaped robots. They write software that governs the motion of these robots. This software has various rules like “If another robot gets within X feet, then move in direction Y at speed Z”. The numerical values of the parameters X, Y and Z are set uniquely for each robot using a pseudo-random number generator. The actual set of rules is very, very long, and no one programmer fully comprehends it. The only way to see how these robots will act is to put them together and watch what happens.
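To make the setup concrete, here is a minimal sketch of what one such parameterized rule might look like. This is my own illustration, not anything specified in the hypothetical: the class name, parameter ranges, units, and seed are all assumptions.

```python
import random


class Robot:
    """A box-shaped robot whose movement rule is fixed at build time by
    parameters drawn from a pseudo-random number generator."""

    def __init__(self, robot_id, rng):
        self.robot_id = robot_id
        # The parameters X, Y, and Z from the rule above: a trigger distance
        # (feet), a heading (degrees), and a speed. The ranges here are
        # illustrative guesses, not values from the post.
        self.x_trigger_feet = rng.uniform(1.0, 20.0)
        self.y_heading_deg = rng.uniform(0.0, 360.0)
        self.z_speed_fps = rng.uniform(0.5, 10.0)

    def proximity_rule(self, distance_to_nearest_feet):
        """One rule out of the (in reality very long) rule set:
        if another robot gets within X feet, move in direction Y at speed Z."""
        if distance_to_nearest_feet < self.x_trigger_feet:
            return (self.y_heading_deg, self.z_speed_fps)
        return None  # this rule does not fire; other rules would be consulted


# Build a few thousand robots, each with its own randomly drawn parameters.
rng = random.Random(20)  # arbitrary seed
robots = [Robot(i, rng) for i in range(2000)]
```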

The researchers scatter them around an enclosed football field and activate them. Generally, they start moving around. Because of the parameter values selected for its code, robot number 1837 begins smashing into other robots at high speed and destroying them.

In sub-case 1, numerous other robots, observing this with their embedded sensors, and operating according to the software that governs their motion, simultaneously move towards this robot and ram it hard enough to destroy it. Then these robots resume moving much as they had prior to this event.

In sub-case 2, numerous other robots, observing this with their embedded sensors, and operating according to the software that governs their motion, simultaneously move towards this robot and surround it. They remain there indefinitely, which prevents robot 1837 from moving.
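For what it’s worth, the difference between the two sub-cases can be pictured as nothing more than a branch in the same fixed rule set. The sketch below is hypothetical (the function and parameter names are mine), but it captures the point that the “response” to robot 1837 is just more of the same software running.

```python
from dataclasses import dataclass


@dataclass
class ResponseParameters:
    """Hypothetical per-robot parameters, drawn from the same pseudo-random
    generator as everything else."""
    approach_speed_fps: float
    contact_threshold_fps: float


def response_to_rammer(params: ResponseParameters) -> str:
    """One hypothetical rule for a robot whose sensors report a nearby robot
    smashing others. The same fixed software covers both sub-cases; which one
    plays out depends only on the parameter values, with no separate step in
    which the robot 'decides' or 'judges' anything."""
    if params.approach_speed_fps > params.contact_threshold_fps:
        return "sub-case 1: converge at speed and ram the offender"
    return "sub-case 2: converge, stop adjacent, and surround the offender"
```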

Is it a meaningful question to ask “under what conditions are the robots justified in executing sub-case 1 or sub-case 2”? Is it meaningful to ask whether robot 1837 has done anything “wrong”? Does morality, duty, fairness, or anything like that describe the behavior of any of the robots? Has any of these robots made a decision or exercised will?

If the answer to these questions is ‘no’, and if we are merely complex machines, then what distinguishes us from these robots in a way that makes any of these concepts relevant to us?