Peter’s criticism of that Steven Pinker essay’s politically facile conclusion is right on, but I think he’s too generous to Pinker’s entire vision of what morality is and what psychology tells us about it.
According to Pinker, evidence suggests that we are hard-wired to behave like utilitarians, complete with impulses toward altruism and maximizing mutual benefit. What corrupts and confuses this universal moral sense, he argues, is “moralization”:
Moralization is a psychological state that can be turned on and off like a switch, and when it is on, a distinctive mind-set commandeers our thinking. This is the mind-set that makes us deem actions immoral (“killing is wrong”), rather than merely disagreeable (“I hate brussels sprouts”), unfashionable (“bell-bottoms are out”) or imprudent (“don’t scratch mosquito bites”).
For Pinker, then, every question that the rest of us might consider a “moral” one is actually a cost-benefit calculation that has been skewed by emotion. For instance, in the old thought experiment involving a runaway trolley car, subjects show an “irrational” aversion to physically interposing a human in order to save several, while throwing a switch that will have the same result is much less emotionally fraught. The squeamishness we might feel about wrestling an unwilling subject into the path of an oncoming trolley is the result of our “moralizing” what is a straightforward calculation, and “our nonutilitarian intuitions come from the victory of an emotional impulse over a cost-benefit analysis.” Once we transcend our vestigial “moralizing” sensibilities, Pinker suggests, our innate and universal moral sense will kick in, and we’ll make decisions that are aligned with Pinker’s proposed objective moral order.
I suppose from Peter’s response that not everyone is as creeped out as I am by such naked utilitarian thinking. So be it. But even if your defining moral principle is the greatest good for the greatest number, shouldn’t you at least recognize and address the absence of perfect information?
Those primal aversions and injunctions that lead us to “moralize” our problems are specifically suited to dealing with our own ignorance and fallibility. The Pinker-brain can cook up cost-benefit calculations all day long, but unless the input data is perfect, the conclusions will be flawed. Even an evolutionary reductionist could build a model in which “irrational” impulses compensate for uncertainty, the same way altruism fosters cooperation and positive-sum outcomes.
If people could calculate utility as efficiently as Pinker suggests they can, there’d be no need for morality. We’d know exactly whom to crush with our trolley cars in exchange for whom. Meanwhile, in this fallible (and yes, “fallen”) world, some of those “irrational” impulses still come in handy.