I Don’t Think These Count
Economist Karl Smith and I have been going back and forth about the nature of the assertions made by economics. In his most recent post on the subject, Smith says this:
I see in one of Jim Manzi old posts he is asking for anyone to come up with 14 non-obvious empirically verified and useful rules made by economists.
He then takes up the challenge by listing 14 propositions that he believes qualify.
The actual challenge in my post was this:
My challenge would be simple: please list 14 useful, non-obvious predictive rules that economics provides that have survived rigorous, replicated falsification trials. [Bold added]
Smith was presumably short-handing, which is fine, but it’s important to keep in mind the bolded terms. A predictive rule has the form “IF observable event X occurs, THEN observable event Y will follow.” Surviving rigorous, replicated falsification trials means that the predictive rule has been tested by independent investigators attempting to disprove it: using it to make difficult, measurable, real-world predictions, and seeing whether it passes all of them.
A number of the rules that Smith proposes don’t seem to me to be, or map unambiguously to, a predictive rule of the form IF X, THEN Y. In this post, I’ll just focus on some of those that seem to be asserted predictive rules. I think there is a consistent problem with most of them: they are not falsifiable.
Consider Smith’s Rule 5: “An increase in the mass of citizenry will not lead to an increase in the proportional mass of the unemployed.”
What about the fact that between 2000 and 2007 the number of U.S. citizens rose, and the proportion of unemployed subsequently increased?
Of course, you will say: “That’s idiotic. The increase in the mass of the citizenry didn’t lead to the increase in the proportional mass of the unemployed. It was a coincidence.” But this counter-argument begs the crucial question: how do you know that?
I’ve picked an extreme example to make a methodological point. We all know that there was a huge economic crisis at the same time that probably had an enormous role to play in rising unemployment starting in 2008. OK, so should the corrected rule say that “An increase in the mass of citizenry will not be followed by an increase in the proportional mass of the unemployed, unless there is also an economic crisis of the following dimensions”? I don’t think that works, because I can come up so easily with another counter-example that did not occur during a financial crisis, which would then require some other exception. And so on. These aren’t isolated incidents; there are thousands of examples, of varying duration and geographical extent, of population increases followed by unemployment increases. Either you are able to generalize these exceptions into some (potentially probabilistic) rule that can be stated in reasonably compact form, or you’re writing a history book.
If you have built such a rule, then it can be tested; but you don’t get to rescue the rule after it fails a test by stating some new exception, and claiming that the rule is still “basically right.”
Rules like this, that embed terms like “leads to” or “causes,” typically will not really be falsifiable, because they beg the question of causality that falsification testing is meant to establish provisionally.
Consider, in the same light, Smith’s Rule 6: “The total flow of services available to the community cannot, in general, be increased by destroying some stock of assets. I.E. one cannot raise general living standards by breaking glass to give work to the glass maker.”
What about Western Europe between 1939 and 1970? Following the massive destruction of its stock of assets in WWII, Europe rebuilt to provide a much greater flow of services to the community. The same problem exists for the counter-argument that “but for the destruction of WWII, Europe would have been even richer in 1970.” How do you know that? The rule embeds the assumption of the answer in the form of “be increased by,” so counter-examples can’t disprove the rule.
Or consider Smith’s Rule 9: “Increasing the supply of medicine and vaccines to a preindustrial society will cause living standards to fall.”
What about Europe from before to after the industrial revolution? The supply of medicine and vaccines was increased, and living standards subsequently rose enormously. Same problem again: the rule simply assumes “cause,” and this precludes falsification.
(Cross-posted to The Corner)
Jim:
I’m puzzled by your objection. Let’s take Smith’s rule #5. The actual rule (assuming it exists) is a probabilistic rule. What Smith has done is present the inference about causality that economists have drawn from the empirical data, and you’re objecting that this statement about causality cannot directly be observed. But scientists make inferences like these all the time – even physicists. Why can’t economists?
Smith’s proposition #5 is: “An increase in the mass of citizenry will not lead to an increase in the proportional mass of the unemployed.”
This is a causal inference from an empirical proposition that might actually be tested, such as:
“There is no correlation between an increase in population and an increase in the unemployment rate.”
Or, even better:
“You might think that an increase in the workforce predictably leads to a proportional increase in the number of unemployed, but the data do not confirm this; rather, the average increase in unemployment following an increase in the workforce is substantially less than proportional, if that average is a positive number at all.”
My restatement of his proposition is something that could straightforwardly be tested against historical data. Take thousands of periods of measured labor force growth and look at what happens to the unemployment rate. If what you get is a bell curve centered around zero, you’ve got evidence for my proposition. A reasonable inference from the above is that growth in the labor force does not cause the unemployment rate to increase, because if it did you’d expect to see a bell curve centered above zero – and, indeed, you’d expect to see some correlation between the magnitude of the increase in the labor force and the magnitude of the increase in the unemployment rate. That inference is what Smith cited as a predictive rule.
What is wrong with this reasoning? In what way is it unscientific?
Let me be clear: I don’t know if the data support my proposition. All I’m saying is: the data exist, and can be tested against the proposition, and if the data confirm the proposition, then Smith’s proposition is a reasonable and proper inference for a scientist to make.
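As a rough sketch of what that test might look like – all of the data below are simulated under the null hypothesis, purely for illustration – consider:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: percent growth in the labor force and the change in the
# unemployment rate over thousands of historical periods. Simulated here under
# the null (no relationship between the two), purely to illustrate the test.
labor_force_growth = rng.normal(loc=1.0, scale=0.5, size=5000)
unemployment_change = rng.normal(loc=0.0, scale=0.3, size=5000)

# If labor force growth does not drive unemployment up, the distribution of
# unemployment changes should be a bell curve centered near zero...
mean_change = unemployment_change.mean()

# ...and the size of the labor force increase should be uncorrelated with the
# size of the unemployment increase.
correlation = np.corrcoef(labor_force_growth, unemployment_change)[0, 1]

print(f"mean change in unemployment rate: {mean_change:.3f}")
print(f"correlation with labor force growth: {correlation:.3f}")
```

With real historical series in place of the simulated ones, a mean near zero and a negligible correlation would be exactly the evidence for my proposition.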
— Noah Millman · Mar 22, 02:06 PM · #
A lot of the econ rules require a time qualifier, typically “in the short run,” “in the medium run” or “in the long run.”
I guess I will agree with Jim that those aren’t particularly scientific terms. (It’s also impossible to get clean data for the medium or the long run).
— J Mann · Mar 22, 02:10 PM · #
BTW, I increasingly feel like our dispute is a variation on a dispute between a species of positivist (you) and a species of pragmatist (me).
Your epistemological humility sounds an awful lot to me like the positivist project. And I just don’t have much use for that project. I don’t think knowledge advances primarily by starting with a set of rigorously known knowns and slowly, painstakingly adding to that stock. I think all our knowledge is uncertain and the advance of knowledge is really the reduction of that uncertainty. And uncertainty can be reduced very substantially before you ever get to the point where you have strong predictive rules.
— Noah Millman · Mar 22, 02:15 PM · #
As a total aside, your response to Smith’s rule #6 points to an area very worthy of study by economists – assuming you think there’s anything economists can learn about anything.
The proposition as stated sounds obviously intuitive: you can’t increase living standards by destroying valuable assets. Duh.
But what if a great deal of labor is devoted to unproductive activity trying to preserve the value of assets that are depreciating?
Think of a major city of a certain age. Substantial economic costs are born by residents and businesses struggling with an outdated and inadequate infrastructure. But updating that infrastructure would also impose enormous economic costs on the residents and businesses. One might actually do a calculation proving that destroying the city and rebuilding it would add more value to the citizenry than continuing to update and maintain the city as it is. But there is no way to actually do that transaction – you could not, practically, get agreement from the mass of citizens to vacate the city for three years to allow you to bulldoze the place and start from scratch. And so, you continue to bleed value until an earthquake does the job of destroying the city for you, and releases all that labor for more productive activity.
I don’t know whether there’s a testable proposition there. If there is, it would look something like “the economic cost associated with a natural disaster increases less than linearly with the scale of the disaster” or “the economic cost associated with a natural disaster is inversely proportional to the age of the infrastructure destroyed by the disaster” or something like that. I don’t know if you’d have enough data to find a testable proposition, but you might. And, if you did, I think you could make some useful inferences therefrom that would help you make more accurate estimates of GDP in the wake of a natural disaster. Which is what this ultimately boils down to, right?
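To make the first formulation concrete: assuming one had cost and magnitude estimates for many disasters (everything below is invented for illustration), a log-log regression would test whether cost grows less than linearly with scale:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dataset: disaster "scale" (say, energy released or area
# affected) and estimated economic cost. Generated here with a sub-linear
# relationship (cost ~ scale^0.7) plus noise, purely to illustrate the test.
scale = rng.uniform(1.0, 100.0, size=500)
cost = scale ** 0.7 * np.exp(rng.normal(0.0, 0.2, size=500))

# Fit log(cost) = b * log(scale) + a. An exponent b reliably below 1 would
# mean cost increases less than linearly with the scale of the disaster.
b, a = np.polyfit(np.log(scale), np.log(cost), 1)
print(f"estimated exponent: {b:.2f}")
```

Run against real disaster data, an estimated exponent well below 1 would support the less-than-linear version of the proposition.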
— Noah Millman · Mar 22, 02:28 PM · #
Noah,
I hear you: there might be a version of that rule that could be stated probabilistically. That’s just not the one he put forward.
If one were to address a weaker probabilistic statement, it would have to be quantified, and then could be tested (recognizing that this creates further complications, as no probabilistic statement can ever be strictly falsified).
I’m really not on a positivist project – quite the contrary. Looking back at the book as I’ve been editing it, I’d say it’s more like trying to be rigorous about what it means to be pragmatic – not in the sense of “better” (i.e., “more like positivist”) knowledge, but rather in terms of really trying to think through, as best I can, how to make decisions in the face of uncertainty.
I think that there is a deep tension between scientific knowledge and the implicit knowledge developed through evolutionary processes, which allows cumulative progress when we don’t have scientific knowledge and which is, in fact, the basis of the human decision-making that scientific knowledge sits on top of, short-cuts, or improves upon. The specific sub-element that I’ve been going back and forth on in this case is my attack on what Hayek called “the pretense of knowledge.” All this is just a set of (important, IMO) preliminaries before getting to the “so what do we do”.
I agree with you about the research project. I ordered these three examples from least-to-most plausible objections. In the Mancur Olson sense, I think the idea that destroying a stock of physical assets can break down existing power structures, etc., and in certain cases actually accelerate growth, is at least plausible.
Best,
Jim
— Jim Manzi · Mar 22, 03:01 PM · #
Jim
In rule #5 I was thinking of the Irish Potato Famine, which I took to be an exogenous shock, as well as the Mariel Boatlift, which was likewise a quasi-experiment. Of course, given that population is always increasing and unemployment is sometimes increasing, there will be times when the two go together. However, if you do something that explicitly increases the population, you do not get an increase in the unemployment rate.
It seems to me that the things I mentioned are just as good as “falling bodies accelerate towards the ground irrespective of weight” which is true in a vacuum but not generally what one will observe.
Isn’t “having survived falsification trials” setting the bar a bit unfairly high? It isn’t that we failed the trials; it’s that no one is going to let us do them. And, believe me, we are cold-blooded enough to do it if we could get away with it. If we survive repeated natural experiments, shouldn’t that count?
Additionally, I readily admit that we do not have an “Engineering Praxis” on nearly the scale of physics, though I would suggest that Finance and Insurance are comparable in scale to Pharma. However, I think this is related in large part to the ability to capture the gains from developing such a praxis.
For example, if there were Competitive For-Profit government, we might see Policy Engineering blossoming on the scale of tech.
Anyway, a lot of people seem to be interested in this. I suspect we are not that far apart in our ideas, so maybe this would be ripe for some type of recorded dialog.
— Karl Smith · Mar 22, 03:40 PM · #
Karl,
I also suspect that we’re a lot closer than it seems on this.
I’d welcome the opportunity for a dialogue.
Best,
Jim
— Jim Manzi · Mar 22, 07:03 PM · #
Rules like this, that embed terms like “leads to” or “causes,” typically will not really be falsifiable, because they beg the question of causality that falsification testing is meant to establish provisionally.
Physics has the same problem. Consider the Large Hadron Collider, which is designed to reveal the Higgs boson if it exists. We can think of this as a causal claim, something like this: If the Higgs boson exists and the LHC is working correctly, the Higgs boson will show up in the detector at the end.
The Higgs boson hasn’t shown up yet. Does this mean that the Higgs boson hypothesis has been falsified? No; some physicists think they didn’t get the parameters exactly right or that there might be some sort of misdesign in the LHC. Strictly speaking, the claim that the Higgs boson exists (or that it would cause this sort of pattern to show up in the detector) can never be falsified, because it can never be tested in isolation from background conditions.
More generally, causal claims almost always have to include a “ceteris paribus” or “shielding” clause: if C happens and nothing interferes (or, things work the way they’re supposed to), then E happens. This is the case in physics and chemistry as much as in economics. And it means that causal claims can never be strictly falsified, since the alternative that something-we-don’t-know-what interfered is always on the table.
Maybe this is just the point you’re trying to make, and you have some positive proposal of what to do in the face of this sort of problem. In that case, giving your readers a hint of that proposal might help mollify the critics.
— Dan Hicks · Mar 23, 12:37 PM · #
Jim:
Assuming I understand you, you’re arguing something like:
What we “know” scientifically is really, really limited. Like, limited to almost nothing. Therefore, we are tempted to behave as if we know more than we do – to give the imprimatur of scientific knowledge to all sorts of things that don’t deserve the label. And we devalue implicit knowledge that is the result not of conscious and controlled experimentation but unconscious evolved systems of behavior. We should adjust our decision-making accordingly, giving more weight to this kind of implicit knowledge and less weight to anything that calls itself science.
I’m eager to read your argument in full when it is published, but my instinct is to disagree with this, for a few reasons.
First: what we are doing when we rely exclusively on implicit knowledge isn’t really what we mean by decision-making at all. Habitual behavior is what we engage in when we are not making decisions; decisions are what we have to make when we come to a crux, when our regular habits of behavior do not provide us the answer to the question, “what do we do now?” A modern woman must decide whether and if so whom to marry. A woman in a traditional society does not make such a choice: she marries the man her father has selected. How does he make this decision? He is also probably highly constrained: confined to eligible males of the right social station with the right degree of consanguinity (not too much, not too little). His heuristic will be very simple not because he knows more than the modern woman (being in touch with this implicit knowledge) but because his choice has been arbitrarily and severely limited – and hers even more so. The “knowledge” embedded in this social structure is, basically, the knowledge of how to create a social structure unlikely to experience change over long periods of time. But if that’s not your goal, then what’s the use in relying on this implicit knowledge?
Second: uncertainty is not a constant. It feels to me as if, by your definition, there is nothing we “know” in a scientific sense. I don’t even have to talk about the probabilistic nature of reality itself; I can just point out that every causal rule we have is an induction, and that induction itself has no solid justification. But that doesn’t mean that nothing we think we know has proven useful. You like pointing out the example of building an airplane as something that demonstrates that we have genuine scientific knowledge. If you follow this plan, and build an airplane according to these specifications, it will stay up. But it won’t, necessarily. Airplanes crash. The rule “if you build an airplane this way, it will fly” has literally been falsified, because all systems sometimes fail. Does that mean we don’t actually “know” how to build an airplane? Moreover, in the early days of experiments with flight, failure was general. At what point did we cross from “not knowing” how to build an airplane to “knowing”? If there is no sharp line, and if you grant that we would never figure out how to build an airplane by relying primarily on implicit knowledge, then how do you account for the fact that we do build airplanes, and that they generally stay up?
Third: I don’t really see any deep tension between scientific knowledge and implicit knowledge. Implicit knowledge may be compared to the “knowledge” an ant colony has of how to build an anthill. This implicit knowledge is an object of study for entomologists. Similarly, the implicit knowledge embedded in human behavior is the proper object of study for the human sciences: anthropology, sociology – and economics. The market is an evolved system – something that evolves within a framework of consciously-created rules, but not reducible to those rules, and not subject to central, conscious design at all. But it can be studied, and rules discovered as the result of this study. And we will do a better job of discovering these rules if we approach the matter scientifically. Which is what, I would argue, economists are actually doing. If they do their job well, they reduce the scope of uncertainty – never to zero, of course, but they reduce it nonetheless. And that is an enormous service to decision-making, because in a situation of total uncertainty decision-making is impossible.
But, as I say, I’m eager to read your book and see what kind of argument you make therein.
— Noah Millman · Mar 23, 01:45 PM · #
Yeah, not only can planes fail, but going even deeper, Newtonian physics is still objectively false. Yet we can still make planes fly based on false knowledge of the universe. That’s the problem I tend to see with Manzi. He makes some pretty solid claims about truth and knowledge and causality… but has zero ability to grant these claims the logical rigor they need to stand on their own. He’s trying to articulate a contradictory form of scientific method that provides answers in some instances and doesn’t in others.
— Console · Mar 24, 07:22 PM · #
Dan / Noah / Console:
I’ve been trying to figure out how to respond to these excellent comments without excerpting about 40,000 words from the book. Here’s the best I can do. Science is philosophically sloppy. This is a feature, not a bug. A huge insight was that avoiding a set of philosophical questions was essential to its ability to create practical progress. This sloppiness works to its advantage in environments of lower causal density and holistic integration (terms I define in the book), but becomes more problematic as environments become more causally dense and holistically integrated (a combination I term integrated complexity). Human society is an environment of extreme integrated complexity. This implies great care in applying scientific techniques in these areas, and more limited ultimate applicability.
Best,
Jim
— Jim Manzi · Mar 24, 07:40 PM · #
The “broken pane” argument is true if and only if the butcher had alternative uses for his money. If he is hoarding it, then the broken pane will lead to new investment (in a window pane); if he was going to use the money to expand his business, then the broken pane reduces what he could have spent.
— John Médaille · Mar 25, 05:11 PM · #
If he is hoarding it, then the broken pane will lead to new investment (in a window pane)
New investment now at the expense of new investment later. Which is not necessarily of no value, considering the time value of money.
— The Reticulator · Mar 26, 06:18 PM · #
Noah’s point is that if the broken window causes the butcher to move to a more profitable location, or to open a successful sidewalk cafe in front of his shop, then it may result in economic growth. (Presumably by freeing the butcher from the “sunk costs fallacy.”) If it merely causes the butcher to replace the window, or to move to a location that he believes to be more profitable but turns out to be worse, then it’s a loser.
This is more or less the theory of why productivity is increasing as a result of the recession: lowered demand causes firms to reorganize and shed their lowest-contributing workers.
— J Mann · Mar 29, 03:54 PM · #
Jim, “beg the question” does not mean “raise the question”. See http://begthequestion.info/
— andrew · Mar 29, 06:11 PM · #
If the butcher is going to move to a more profitable location because the window is broken, that doesn’t mean that the best way to get him to move is to break the window. Probably it’s better to give the butcher some other sort of nudge, and let the baker take over the shop with the window intact.
— andrew · Apr 1, 07:30 PM · #