Jim and Noah’s Excellent Adventure, Part 1
As always, Noah Millman has raised some excellent questions about what I’ve written. This post addresses the first part of his response, and I’ll try to address the second part in the next post.
Noah’s first question is whether, by the same logic applied to economics, I consider geology to be a science.
Let me start with the purpose of science as I see it. To quote Francis Bacon, from Novum Organum, the fountainhead of the scientific method: “the true and lawful goal of the sciences is none other than this: that human life be endowed with new discoveries and powers.” More specifically, the purpose of science is to create useful, reliable and non-obvious rules that allow us to predict the effects of potential interventions we might make on the physical world. Everything else – data collection, equations, laboratories, scholarly journals, tenure committees, all of it – is a means to this end.
The method that science uses to do this resists formal algorithmic description, and varies somewhat by specialty and over time, but broadly includes hypothetico-deductive reasoning for building theories, and controlled experiments for testing them. The first third of my upcoming book is an attempt to describe this method, so I won’t try to do it here, beyond this very high-level description.
The fifth chapter of the book is titled “Science without Experiments.” In it, I distinguish between two kinds of fields that are generally considered science, but that do not rely much on experiments: (1) so-called historical sciences, such as geology or parts of evolutionary biology, for which most experiments are impossible in principle because these fields address only past events; and (2) fields which make forward predictions, but for which most experiments are infeasible, the principal example being the astronomical sciences. Without trying to recapitulate the whole argument here, my view is that to the extent that historical science contributes to the development of predictive rules that are subsequently tested and corroborated, it is scientific; to the extent that it does not, it is not. (Of course, we have to recognize that the route this contribution takes can be very long and circuitous.)
Noah goes on to say:
Economists can predict all kinds of boring things with great accuracy. Even things we’re interested in, they do well enough that people with money on the line – investors, business owners; not just politicians – rely on economic forecasts all the time. They aren’t perfect, but they are better than any available alternative. No, they can’t predict the things we’re really interested in – such as when the next recession will hit. But the geologists can’t predict the next earthquake either.
Here’s a very short excerpt from the book on this question (using the analogy to weather forecasting, rather than earthquake forecasting):
“GDP will likely grow by 1.3% next quarter” kinds of forecasts that are better than naive forecasts (what meteorologists term forecasts with “skill”) are useful to have. Check. They give us material information about the near-term background against which our interventions will be executed. Further, it is often the “scientific-seeming” aspects of meteorology (lots more monitoring stations, satellites and super-computers, that collectively create a much more granular finite element model) and economics (analogously, larger data sets and more megaflops) that allow increasing skill. Check.

[N]ot much that is practically important about the overall development of the economy is predictable in the long-term. Short-term forecasts of complex systems can sometimes be made on a combination of “momentum” and the assumption of a relatively simple set of causal mechanisms. Five-day weather forecasts can be made, for example, by more complex versions of the observation that the weather in Ohio today is highly correlated with the weather in Virginia a couple of days later because of consistent wind patterns.
But because they lack a robust understanding of the extraordinarily dense causal network that governs the system in question (using “causal” here in the operational sense of the ability to make accurate predictions about the difference between the world in which we execute an intervention as compared to the counterfactual world where we do not), they do not allow us to make reliable, non-obvious predictions about the result of proposed actions. This is closely related to my fundamental criticisms of economics and other social sciences – not that they can’t do this at all, but they have extremely limited ability to do so, and many social scientists (or often, social science popularizers) make wildly excessive claims about this capability. We ought, in my view, to premise our political economy around an accurate understanding of our ignorance.
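To make the notion of forecast “skill” concrete, here is a minimal sketch (all numbers invented for illustration) of how a skill score measures a forecast against a naive persistence baseline:

```python
# Illustrative only: "skill" as fractional improvement over a naive baseline.
# The GDP-growth figures below are entirely made up for demonstration.

def mse(pred, actual):
    """Mean squared error of a sequence of predictions."""
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual)

def skill_score(forecast, actual, baseline):
    """Fraction of the baseline's error eliminated by the forecast.
    1.0 = perfect, 0.0 = no better than the baseline, < 0 = worse."""
    return 1.0 - mse(forecast, actual) / mse(baseline, actual)

# Hypothetical quarterly GDP growth (%): outcomes, a model forecast,
# and a naive "same as last quarter" persistence forecast.
actual      = [1.3, 1.1, 0.8, 1.5, 1.9, 1.6]
model       = [1.2, 1.0, 1.0, 1.4, 1.7, 1.7]
persistence = [1.4] + actual[:-1]   # last observed value carried forward

print(round(skill_score(model, actual, persistence), 3))  # → 0.864
```

A positive score means the forecast beats the naive baseline; the claim in the excerpt is that for complex systems this advantage decays toward zero (or worse) as the forecast horizon lengthens.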
At a certain level, it seems to me to be an enormously obvious case of the emperor’s new clothes that when you go from a discussion at an abstract level like this to a very practical level – “OK, please state the rules developed by economics that can actually make useful, reliable and non-obvious predictions about the result of alternative proposed courses of government action on the issues of the day” – you get a pretty underwhelming result: non-falsifiable statements, which 10% – 20% of the practitioners in the field dispute. There are obviously debates in frontier areas of all science, but at the level of things that can be said in a freshman textbook, there is also a body of agreed-upon causal laws that, crucially, have been converted to a body of engineering praxis that can be used to solve real problems. In all of the back-and-forth with economics bloggers on this question, I haven’t yet been confronted with anything approaching F=MA plus a long, long series of experiments showing that this is an excellent engineering approximation at terrestrial scale and speed that is the basis for the following machines that will produce X effect based on Y intervention, or a list of approved drugs, derived from the following work in biology labs, that have shown in clinical trials that when you introduce the following chemical into the bloodstream of a human with the following measurable indicators of a disease state, it will change those indicators with the following distribution of change. When it comes to economics, where’s the beef?
Rather than philosophize about what economists can or can’t do, let’s focus on what economists and social scientists can but won’t do for ideological and careerist reasons.
I’ve been a fan of the social sciences for 39 years. The social sciences have come up with lots of replicable findings of vast importance to public policy. For example, it’s obviously extremely important for predicting the effects of immigration policy to know that illegal immigrants tend to be of below average IQ, have below average IQ children, and make below average efforts to educate themselves or their children, as one look at California’s NAEP scores would show.
Yet, how many social scientists have publicly sounded this warning over the last generation?
— Steve Sailer · Mar 19, 09:44 PM · #
Consider a sports analogy. Jim is complaining that the experts can’t predict who will win the NCAA basketball tournament. I’m complaining that the experts noticed a long time ago that the NCAA tournament is always won by … tall guys, but they don’t like to mention that fact to the people deciding whom to recruit to play basketball.
— Steve Sailer · Mar 19, 09:48 PM · #
Steve,
I don’t think that’s the point exactly.
I’ve always tried to be careful to say that what economists (and by extension, social scientists in general) have a very hard time doing is specifying useful, reliable, non-obvious interventions. This applies in particular to proposed interventions that have not been subjected to replicated randomized trials.
Take your examples:
I’ve never challenged the ability (or value) of social scientists collecting and reporting data.
Or:
Same thing, I think.
Best,
Jim
— Jim Manzi · Mar 20, 10:47 AM · #
You discuss the second part quite cogently. But the first part is equally or even more important.
Returning to Keen’s takedown of neoclassical economic theory: he demonstrates that neoclassical theory gets a “fail” for the first task, because it depends on assumptions that are false on their face, and are contradicted by the terms and methods of neoclassical theory itself.
Those assumptions were added to the theory to resolve its own self-contradictions, contradictions pointed out by neoclassical economists themselves, but subsequently assumed away. (The economist on the desert island with a case of canned beans: “Assume we have a can opener.”)
This post facto rationalizing gives good support for Keen’s statement:
pithlord jumps straight to that myth: that a theory’s assumptions need not be valid, as long as it predicts successfully. Keen devotes a chapter to addressing that Friedmanism, and does so quite tellingly.
As Keen points out, Friedman actually went even further in his assertion. He said “the more significant the theory, the more unrealistic the assumptions.” He seems to be asserting that really big, important, far-reaching theories ignore many small factors — as they must.
1. Citing Musgrave, Keen points out that Friedman doesn’t distinguish between “negligibility” and “domain” assumptions.
Of course theories assume that certain (almost all) factors have negligible effects, and ignore those factors. Theories are simplifications; that’s a tautology.
Domain assumptions define the conditions under which a theory is applicable. If the theory’s assumptions are not true in the domain, the theory is false — inapplicable to that domain. Neoclassical theory was forced to adopt assumptions to avoid self-contradiction that are not in fact true in the domain it seeks to explain.
2. Economists in fact care very, very much about assumptions. If an argument does not accept their cobbled-together, rear-guard defensive assumptions a priori, they do not deem it worthy of consideration or discussion. Their discussion space is hermeneutically sealed against any fundamental logical or empirical challenge.
They can’t have it both ways. Either (domain) assumptions matter, or they don’t.
“Pathological”? Seems apt. It also seems a quite reasonable explanation (perhaps among many) for your apt assertion that economics – at least as currently constructed – is often/largely/always useless for long-term prediction.
“Assume the earth is hollow…”
This is before we even get to empirics. The theory is resting on quicksand.
— Steve Roth · Mar 20, 04:23 PM · #
First, other disciplines are having this discussion too.
Second, the various roles that people play are important in deciding when science is engaged or not.
http://ageofintuition.blogspot.com/2011/03/anthropology-economics-practitioners.html
— L Moore · Mar 20, 08:34 PM · #
Jim says:
“I’ve always tried to be careful to say that what economists (and by extension, social scientists in general) have a very hard time doing is specifying useful, reliable, non-obvious interventions.”
Perhaps your point is that immigration policy, while useful and reliable, is too darn obvious for social scientists to specify!
— Steve Sailer · Mar 20, 10:59 PM · #
To broaden the discussion, Jim’s argument would be inarguable if phrased like this: Social scientists tend to have trouble specifying additional social engineering interventions that are useful and reliable. In contrast, however, they have collected enormous amounts of evidence that selection tends to be quite effective.
For example, whatever ideology they espouse in public, all leading American social science departments are highly selective in private in whom they admit to their ranks as their Ph.D. students.
— Steve Sailer · Mar 20, 11:45 PM · #
Steve,
I suspect that by “social engineering interventions” you mean something very much like what I mean by “interventions.” Assuming that’s right, I obviously agree with your first sentence.
I’m not sure exactly what you mean by selection being “effective.” If you mean that psychometric testing can identify people with higher expected performance on both other tests and relevant lifetime outcomes with (much) better-than-random accuracy, I agree. Further, I think it is purely ignoring evidence to disagree.
In other words, if I understand what you mean, I don’t see the contradiction with what I’ve said.
Best,
Jim
— Jim Manzi · Mar 21, 10:41 AM · #
Jim:
I’m eager to read your book when it comes out. Thoroughgoing epistemic humility sounds like a wonderful thing in theory, but I have kind of a hard time seeing how it would be successfully applied in practice. Which is why I’m eager to read your book!
Best,
-Noah
— Noah Millman · Mar 21, 11:54 AM · #
Jim
The F=MA of economics is “Demand curves slope downards”
The observation that pushing on something will make it go faster is fairly straightforward, but the wide applicability of F=MA is not obvious. Similarly, while it’s clear to most people that it is easier to sell a cheap thing than an expensive one, the wide applicability of the Law of Demand, as it is sometimes called, is not obvious. This can be seen from people and politicians trying to do things which fly in the face of downwardly sloping demand.
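The Law of Demand, and what happens when policy pushes against it, can be made concrete with a toy linear market (every number below is invented purely for illustration):

```python
# Toy illustration of the Law of Demand: in a made-up linear market,
# quantity demanded falls as price rises, and a binding price ceiling
# (a policy flying in the face of the downward slope) creates a shortage.

def demand(price):
    """Units buyers want at a given price (hypothetical linear curve)."""
    return max(0.0, 100.0 - 10.0 * price)

def supply(price):
    """Units producers offer at that price (hypothetical linear curve)."""
    return max(0.0, -20.0 + 10.0 * price)

# The market clears where demand equals supply: 100 - 10p = -20 + 10p -> p = 6
clearing_price = 6.0
print(demand(clearing_price), supply(clearing_price))  # → 40.0 40.0

ceiling = 4.0  # a legal maximum price set below the clearing price
shortage = demand(ceiling) - supply(ceiling)
print(shortage)  # → 40.0 (60 units demanded, only 20 supplied)
```

The point is only the sign of the slope: the specific curves are arbitrary, but any downward-sloping demand curve produces excess demand under a binding price ceiling.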
What you are talking about is modelling, which even in the most simple systems is vastly more complex than simply trying to tease out core laws.
Do we have a good three-body model in astrophysics? Can we even model molecular interactions at a complex level, given quantum mechanics?
I believe determining how a single protein will fold is still the holy grail of molecular biology.
This is roughly the category of things that you seem to be criticizing in economics: our ability to model the actual world in which people live. Is there a science that does this?
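The gap between knowing the core law and being able to model with it can be sketched in a few lines. The law below is just Newtonian gravity in toy units; the three-body configuration and every number in it are invented for illustration:

```python
# Toy illustration: the governing law (Newtonian gravity) is simple and
# exact here, yet two three-body runs that differ by one part in a billion
# in a single initial coordinate still end up in measurably different places.

G = 1.0  # units chosen so the gravitational constant is 1

def accelerations(pos, masses):
    """Pairwise gravitational acceleration on each body (2D)."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def simulate(pos, vel, masses, dt=0.001, steps=20000):
    """Symplectic-Euler integration; returns final positions."""
    pos = [p[:] for p in pos]
    vel = [v[:] for v in vel]
    for _ in range(steps):
        acc = accelerations(pos, masses)
        for i in range(len(pos)):
            vel[i][0] += acc[i][0] * dt
            vel[i][1] += acc[i][1] * dt
            pos[i][0] += vel[i][0] * dt
            pos[i][1] += vel[i][1] * dt
    return pos

masses = [1.0, 1.0, 1.0]
p0 = [[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]]
v0 = [[0.0, -0.4], [0.0, 0.4], [0.3, 0.0]]

a = simulate(p0, v0, masses)
p0[2][0] += 1e-9  # perturb one initial coordinate by a part in a billion
b = simulate(p0, v0, masses)

drift = max(abs(a[i][k] - b[i][k]) for i in range(3) for k in range(2))
print("final-position drift from a 1e-9 perturbation:", drift)
```

Even with the law specified exactly, the trajectory is not reliably computable far ahead of measured initial conditions, which is the modelling-versus-core-laws point above.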
As I mentioned before, when we get to build the lab we do quite well. Financial exchanges are about as close as you get to a frictionless vacuum in economics, and computerized trading in these environments hews very closely to the predictions of economics.
— Kasl Smith · Mar 21, 01:35 PM · #
I’ve followed this line of argument for, oh, the past 1-1/2 or 2 years or so, and I’ve found it fascinating the entire time. I look forward to reading the book version. So, thanks for that!
However, I’m worried about how much you’re relying on some very outdated philosophy of science. To work backwards: (1) Engineering (including pharmaceutical research) isn’t the application of high-level physical laws (like F=ma) to given problems, in any straightforward sense of “application”. It’s much more cobbling together solutions, prototypes, and models out of previously-successful solutions, low-level observations about site conditions (in building a bridge, the local hydrology and geology, for example), and quasi-empirical intermediate-level generalizations (like the way force is “distributed” along a structure made out of steel). The Industrial Revolution was not built on F=ma.
(2) The latest version of the argument seems to move freely between falsification (this evidence undermines the theory), confirmation (this evidence supports the theory), and explanation/understanding (the theory explains why we got this evidence). Sometimes you argue that economics can’t be falsified; sometimes you argue that economic predictions haven’t been confirmed; and sometimes you argue that economics, because it relies on simplifying assumptions, doesn’t actually explain the phenomena it’s supposed to explain. But these don’t always go together. Climatology relies on families of complex models which, because they’re complex, can’t be strictly falsified. But since (and to the extent that) the models agree with each other and historical data, their claims have been somewhat confirmed. Some of these models are explanatory (enable us to understand why various aspects of global warming are interrelated) but some of them aren’t (catastrophic changes, in the technical sense, often just suddenly appear in the simulations, for reasons we don’t understand).
(3) Similarly, science — and I have in mind fields like physics and chemistry now — can only be said to use hypothetico-deductive reasoning if you’re implicitly including a “well, that didn’t work, let’s go back and tinker with the parameters” step. You throw in a nod in the direction of the fact that science doesn’t follow any nice, neat algorithm, but you don’t consider whether the messiness and complexity of the highly social human activity of theory development means your own assumptions are oversimplified and empirically inadequate.
— Dan Hicks · Mar 21, 01:40 PM · #
Dan,
Thanks. I devote an entire chapter to the notion of science as a social enterprise.
When I use “confirmation” or “validation” in this kind of an informal blog post context, what I mean is “withstanding repeated, rigorous falsification trials”.
I am using H-D in contrast to classic (Baconian) induction. That is, anything is fair game for defining a theory.
In the book, I try to unify paradigms and falsification into a structure for thinking about how science draws conclusions (though I think the presentation is original, the basic concept is not enormously different from what Lakatos put forward). I wasn’t trying to make original contributions to the philosophy of science, just to have some rigor and definition of terms before getting to the discussion of social science knowledge and findings.
Best,
Jim
— Jim Manzi · Mar 21, 05:39 PM · #