Morality and Fallibility
Peter’s criticism of that Steven Pinker essay’s politically facile conclusion is right on, but I think he’s too generous to Pinker’s entire vision of what morality is and what psychology tells us about it.
According to Pinker, evidence suggests that we are hard-wired to behave like utilitarians, complete with impulses toward altruism and maximizing mutual benefit. What corrupts and confuses this universal moral sense, he argues, is “moralization”:
Moralization is a psychological state that can be turned on and off like a switch, and when it is on, a distinctive mind-set commandeers our thinking. This is the mind-set that makes us deem actions immoral (“killing is wrong”), rather than merely disagreeable (“I hate brussels sprouts”), unfashionable (“bell-bottoms are out”) or imprudent (“don’t scratch mosquito bites”).
For Pinker, then, every question that the rest of us might consider a “moral” one is actually a cost-benefit calculation that has been skewed by emotion. For instance, in the old thought experiment involving a runaway trolley car, subjects show an “irrational” aversion to physically interposing a human in order to save several, while throwing a switch that will have the same result is much less emotionally fraught. The squeamishness we might feel about wrestling an unwilling subject into the path of an oncoming trolley is the result of our “moralizing” what is a straightforward calculation, and “our nonutilitarian intuitions come from the victory of an emotional impulse over a cost-benefit analysis.” Once we transcend our vestigial “moralizing” sensibilities, Pinker suggests, our innate and universal moral sense will kick in, and we’ll make decisions that are aligned with his proposed objective moral order.
I suppose from Peter’s response that not everyone is as creeped out as I am by such naked utilitarian thinking. So be it. But even if your defining moral principle is the greatest good for the greatest number, shouldn’t you at least recognize and address the absence of perfect information?
Those primal aversions and injunctions that lead us to “moralize” our problems are specifically intended to deal with our own ignorance and fallibility. The Pinker-brain can cook up cost-benefit calculations all day long, but unless the input data is perfect, the consequences will be flawed. Even an evolutionary reductionist could build a model in which “irrational” impulses compensate for uncertainty, the same way altruism fosters cooperation and positive-sum outcomes.
If people could calculate utility as efficiently as Pinker suggests they can, there’d be no need for morality. We’d know exactly whom to crush with our trolley cars in exchange for whom. Meanwhile, in this fallible (and yes, “fallen”) world, some of those “irrational” impulses still come in handy.
1. You’re absolutely right about the inability to perfectly assess utility.
2. Pinker makes the common leap of confusing biological predisposition with what one should do. Like Richard Dawkins arguing that a physiological predisposition towards infidelity means we should abandon ideas of sexual monogamy, Pinker leaps from predisposition to moral imperative. There is no particular reason to believe that what we are predisposed to do is the same as what we ought to do. (The implication, I would suggest, is that it’s unfair to ask us to act contrary to our biological impulses; but of course, that requires some judgment of what is unfair. You might call that a theistic argument, but that’s controversial.)
3. Utilitarianism, of course, is utterly incompatible with some of our most basic democratic notions, like individual rights, personal and familial commitments, and prior obligations.
— Freddie · Jan 15, 06:21 PM · #
“If people could calculate utility as efficiently as Pinker suggests they can, there’d be no need for morality.”
I think you are missing the point. Pinker is not making normative claims (or if he is he shouldn’t), he is making descriptive claims.
His argument is as follows. Studying brain-states and correlate human behaviors, we can make epistemologically sound claims that humanity has a native “moral instinct.” Further, using the same epistemologically sound methods we can (roughly) put together how this moral instinct works.
From Marc Hauser and others, it appears to be the case that instantiations of the moral instinct proceed as follows: 1) input from the environment leads to 2) a snap intuition and, simultaneously, 3) an emotional response, the admixture of which creates the sensation of 4) moral judgment.
From many, many different kinds of experiments, it has been shown that step number two is informed by some measure of cause-and-effect understanding, underscored by the intentional stance (the default neo-cortical world-modeling of human nature). Thus, time and again in experiments, subjects rate intended bad acts as morally worse than those which are seemingly unintended. Another way of saying this is that the subjects experience a stronger emotional reaction if the intuition of step number two results in an assumption that the agent intended the bad outcome. This is relatively uncontroversial in the field, actually; it is a phenomenon found in pre-linguistic children of all ethno-cultural groups.
Notice this says nothing about the efficiency or accuracy of the intuition. In fact, the nature of the moral instinct, when combined with limited/inaccurate knowledge of cause and effect, guarantees a certain amount of error in output.
— JA · Jan 15, 06:33 PM · #
Of course, it gets even more complicated when you add in-group/out-group distinctions. If the single person on the alternate track is their child, parents won’t pull the lever to switch the tracks. And this fits common sense: a parent values their own child more than five strangers. And this can be extended to more abstract in-group relationships, too.
If one is a moral philosopher, it is worth knowing this information because this will tell you which moral systems will work — i.e. which will readily map onto the prepared mindspaces of the moral instinct — and which will falter. It won’t, however, help you choose among practicable moral systems.
— JA · Jan 15, 06:47 PM · #
Studying brain-states and correlate human behaviors, we can make epistemologically sound claims that humanity has a native “moral instinct.” Further, using the same epistemologically sound methods we can (roughly) put together how this moral instinct works.
Again: why should I believe that man’s moral instincts, if they exist, are congruent with actual morality?
— Freddie · Jan 15, 08:39 PM · #
Why should I believe that man’s moral instincts, if they exist, are congruent with actual morality?
That’s begging the question somewhat. Empirical evidence has made a compelling case that the moral instinct does in fact exist as an innate output-generator of a certain type and, except in the case of pathology, exists in all humans by way of evolution. On the other hand, no evidence exists for an “actual morality” that is independent of human cognition.
Of course, you do have “ethics” which are learned rules and orderings of human interaction. But I don’t think that is what you were saying. (?)
— JA · Jan 15, 09:33 PM · #
OK, so I’m begging the question. So then are you and Pinker. Right? If morality is what conforms to our physiologically conditioned moral sense, then what is Pinker saying? He is arguing for a morality based on a biological moral sense. But if you begin with the supposition that that is the “true” morality, then you are begging the question.
So the question remains: why should I privilege the preconditioned moral sense that you and Pinker are positing over any other moral system? Who is the arbiter of whether the biologically preconditioned moral sense is in fact moral? If the answer is merely “man”, then this moral system is no different than any other. People like Pinker are attempting to use the sense of certainty that we associate with science to leverage a particular moral vision. But I have yet to hear any compelling reason why I should believe that this biological moral sense is more useful (since you don’t like truth statements, and actually, I don’t either) than any other.
— Freddie · Jan 15, 10:07 PM · #
except in the case of pathology, exists in all humans by way of evolution.
Also, this is where so many of these types of arguments run aground for me. What is the use of this claim? Whether or not this is true is, first of all, subjective, because notions of pathology always are. To use the (overused) example, for most of human history homosexuality was considered deeply immoral and unnatural; the notion that it was not would have been considered pathological. Second, what of the person who disagrees? What if his moral sense tells him that what is moral for you is not moral for him? What moral psychology becomes, on balance, is merely a morality of the masses, a majoritarian morality. I think most of us reject the notion that whatever is considered moral by most people is in fact moral. And third and finally, what does the idea that morality is imprinted on us biologically matter, when so many people disagree about what is actually moral? Osama bin Laden believes what he has done to be profoundly moral. The great balance of the world’s suffering, it seems to me, is inflicted by people who mean no harm. Attempts to pathologize immorality ignore the horrible history of those who did evil in the attempt to do good.
— Freddie · Jan 15, 10:33 PM · #
People like Pinker are attempting to use the sense of certainty that we associate with science to leverage a particular moral vision.
Again, I think we are speaking at cross-purposes. Pinker may be harnessing the authority of science to put forward a paradigm of morality, but I am not. I reside on the other side of the descriptive-normative divide.
Moral psychology seeks to know, in materialistic, cognitive terms, what is happening when a human being makes a moral judgment. Moral philosophy, on the other hand, seeks to flank this materialistic understanding by positing particular axioms, outcomes, and constraints on moral reasoning itself.
The language analogy is useful here. All human beings have an innate “deep grammar” which underwrites the language instinct; we all have a fundamental algorithm for computing symbols.
This is not equivalent to arguing for a particular grammar; it is a claim about all grammars. Likewise, Hauser et al. make claims about the structural relationships of all moral judgments, or, more specifically, the grammar with which all moral judgments are made. Thus, their claims are descriptive, not normative.
If morality is what conforms to our physiologically conditioned moral sense, then what is Pinker saying? He is arguing for a morality based on a biological moral sense. But if you begin with the supposition that that is the “true” morality, then you are begging the question.
And hopefully that clarifies our differences. Whatever Pinker is saying, I say nothing about what “morality” — in a normative sense — should be based on. What I am saying is that 1) “moral judgments”, whatever their particulars, are universal phenomena of human existence; 2) these judgments arise naturally (they do not have to be learned); 3) thus, it is accurate to say that human beings have an innate “moral instinct”; 4) the description of how this moral instinct operates is a proper endeavor of science; 5) this moral instinct manifests in individuals according to a universal grammar, which we can discern via experiments and brain scans; 6) particular outcomes — i.e. particular judgments — are dependent on a highly complex web of contingents; 7) moral psychology says absolutely nothing about what judgments we should have, it only describes how we make them. (And Pinker, an evolutionary psychologist, tries to figure out why we make them — i.e., tries to figure out the ontogeny of the instinct itself.)
To end, when the towers fell on 9/11, Americans collectively judged those acts to be morally reprehensible; on the other side, many radical Muslims judged the acts to be morally laudable. Moral psychology tells us both sets of judgments were made using the same cognitive algorithms. It does not tell us which set is better. Only normative morality can do that.
— JA · Jan 15, 11:31 PM · #
OK, I understand much better what you mean now.
— Freddie · Jan 15, 11:39 PM · #
One more thing (sorry Matt!):
If we assume that the same action can be judged differently by different groups, then we get the raison d’être of moral philosophy. In essence, a recursive moral algorithm, whose outcome is dependent on a highly complex web of variables, will result in a huge amount of heteroglossia in a population. Worse, this centrifugal tendency will be reinforced by the inscrutability of moral judgments: when reason and morality conflict, reason is cast aside. Therefore, moral philosophy (e.g. religion), by creating a unitary moral language, acts as a centripetal force on algorithmic outcome. By giving some of the cognitive variables a constant value, a consistent moral paradigm will tend to bend a population’s aggregate moral judgments toward more order, and more harmony.
— JA · Jan 15, 11:51 PM · #
I thought Pinker actually undermined an exclusively utilitarian view of morality, criticizing it for not being able to describe human action. In his words:
“The most effective way to seem generous and fair, under harsh scrutiny, is to be generous and fair. In the long run, then, reputation can be secured only by commitment. At least some agents evolve to be genuinely high-minded and self-sacrificing — they are moral not because of what it brings them but because that’s the kind of people they are.”
In the very same passages quoted in the post above, Pinker denies the idea that strict cost-benefit analysis is hard-wired. Total utilitarianism is actually the result of brain damage: “Other studies have shown that neurological patients who have blunted emotions because of damage to the frontal lobes become utilitarians: they think it makes perfect sense to throw the fat man off the bridge.”
I suspect Pinker is reacting to a lot of pure utilitarian game-theory scenarios, which are the true areas where utilitarianism is presumed to be the natural state of things. Though I know Pinker has spoken glibly of doing a cost-benefit analysis on whether a mother should commit infanticide, I suspect he would grant that an emotionally skewed moral analysis has its virtues.
— Kevin Jones · Jan 15, 11:59 PM · #
Matt, I think you’re wrong about the meaning of that moralization quote. The description of what moralization is reflects standard work in the field of moral psychology concerning the difference between moral norms and prudential or conventional norms (like those of taste). That description isn’t particular to Pinker. It actually goes back to Elliott Turiel, who was doing research following in the Kohlbergian tradition. It has no connection to utilitarianism.
— Justin · Jan 16, 01:46 AM · #
Worse, runaway-trolley-type calculations tell us the relative moral utility of throwing someone’s grandma onto the tracks without having anything to add about the objective moral worth of human beings. On that count, only a Hobbes-lite version of the pick-the-daisies argument is (weakly) implied. But we know full well that you can have a lot of wars without destroying the human species or making life utterly unlivable. And we have learned even more recently that you can have a significant number of wars without making life uncomfortably bad even on our own more exacting contemporary terms.
This was supposed to post about 10 hrs ago, so, like, sorry.
— James · Jan 16, 03:50 AM · #
It won’t, however, help you choose among practicable moral systems.
That’s almost no constraint at all. It is very hard to imagine anything a Pinker-style inquiry could prove to be impossible for people to do. And if it’s not literally impossible, just very, very hard and costly to do, then moral philosophy still has all the freedom in the world to say: “You go bite that bullet. We don’t care how hard it is, it’s right anyway.”
In other words, Pinker-style inquiries provide no constraints at all on moral philosophies. Only other moral philosophies can do that.
— JD · Jan 18, 08:24 PM · #