Popper is my homeboy: a manifesto
Will Wilkinson has an amusing series of posts demonstrating increasing frustration with the macroeconomic arguments about the stimulus proposal. In one, he sums up the source of his frustration:
When I see Delong more or less indiscriminately trashing everyone at Chicago, or Krugman trashing Barro, etc., what doesn’t arise in my mind is a sense that some of these guys really know what they’re talking about while some of them are idiots. What arises in my mind is the strong suspicion that economic theory, as it is practiced and taught at the world’s leading institutions, is so far from consensus on certain fundamental questions that it is basically useless for adjudicating many profoundly important debates about economic policy. One implication of this is that it is wrong to extend to economists who advise policymakers, or become policymakers themselves, the respect we rightly extend to the practitioners of mature sciences. There is a reason extremely smart economists are out there playing reputation games instead of trying to settle the matter by doing better science. The reason is that, on the questions that are provoking intramural trashtalk, there is no science.
This is just about perfectly stated.
I would state Will’s implicit working definition of science for the purpose of this discussion as “an intellectual discipline that produces useful, non-obvious and reliable prediction rules”. Or at least, that’s mine, and it’s consistent with Will’s statement. Note that this doesn’t let economists, political scientists or others off the hook by saying they want to “avoid physics envy” or whatever. To say that they are practicing non-science by this definition is to say that their theorizing produces decision rules that are at least one of: useless, obvious or unreliable.
In fact, you can see debates in mature sciences that sound a lot like the one that Will describes; they just tend to be around frontier issues. Consider the physics of wings for airplanes. There is a reasonably stable body of findings that can be (and has been) translated into engineering practice that works. Airplanes stay up. Giant tubes of metal with comparatively tiny lift surfaces go up in the sky, travel thousands of miles at about the speed of sound and land safely every day (sometimes, with sufficient pilot expertise, on the Hudson River). That’s about as useful, non-obvious and reliable as anything I see around me. This is the ten tons that sits on one side of the scale whenever somebody wants to get into an argument about whether we “really know” this physics. This is what we lack in most parts of economics, and certainly in the kind of economics that is being shouted about in the stimulus debate.
What is the key methodological feature that distinguishes science from the kinds of economic debates that frustrate Will? Experiments. Properly controlled experiments end debates (in addition, of course, to starting new ones).
I spent about the first ten years of my career executing increasingly sophisticated quantitative analyses that used data to try to evaluate and predict the success of business initiatives in order to develop corporate strategies. Eventually, I saw that these analyses led to the same kind of scholastic debates as we see among macroeconomists. The root issue was that it was impossible to find a methodology that could reliably distinguish correlation from causality. Only through exhaustion of all possible alternatives did I come to find that experiments that randomly assign units of analysis (customers, stores, sales territories, etc.) to test and control groups are the only reliable method for determining causality.
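The core of that insight fits in a few lines. Here is a minimal sketch in Python of what random assignment buys you, with a hypothetical `apply_treatment` and `measure` and invented demo data; this is illustrative of the logic, not APT’s actual methodology:

```python
import random
import statistics

def run_experiment(units, apply_treatment, measure, n_test, seed=0):
    """Randomly assign units (stores, customers, sales territories...)
    to test and control groups, treat the test group only, and estimate
    the causal effect as the difference in mean outcomes."""
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)           # random assignment is what lets us
    test = shuffled[:n_test]        # read the difference as causal
    control = shuffled[n_test:]
    for unit in test:
        apply_treatment(unit)
    return (statistics.mean(measure(u) for u in test)
            - statistics.mean(measure(u) for u in control))

# Invented demo: 2,000 stores with noisy baseline sales; the "promotion"
# has a true effect of +5 units per store.
demo_rng = random.Random(1)
stores = [{"sales": demo_rng.gauss(100, 10)} for _ in range(2000)]

def promotion(store):
    store["sales"] += 5.0

lift = run_experiment(stores, promotion, lambda s: s["sales"], n_test=1000)
# lift recovers the true +5 effect, up to sampling noise
```

Because assignment is random, any pre-existing difference between the groups is pure sampling noise, which shrinks as the groups grow; no observational regression gives you that guarantee.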
Once I figured this out, I became so fixated on it that I started what has now become a pretty good-sized software company, named Applied Predictive Technologies (APT). APT’s tools automate the design and interpretation of experiments for a good chunk of the Global 2000. We have generally tried to stay below the radar, but that’s impossible now, as the Harvard Business Review did an article in the current issue (How to Design Smart Business Experiments) that is mostly about what we have done at APT to make experimental learning a reliable business function. So I can come out of the closet on it a little bit.
Once you have this insight about using experiments to determine causality, it seems like it might be pretty straightforward to apply it, but most relevant business experiments are not trivial to design and interpret. It’s beyond the scope of the HBR article, which is pitched to senior general managers, but there were some fundamental analytical issues that we had to address to make this approach work in practice. It took years of work by scores of some of the most talented mathematics, software engineering and business analysis professionals in the country, plus field iteration with dozens of the world’s largest corporations running thousands of real experiments, to (partially) solve them.
This approach, once correctly installed, changes how many types of business decisions are made. Some issues are not practically testable. For example, a specific program may be non-replicable; the decision may have to be made faster than a test could be conducted, and so on. But if a program is practically testable and an experiment is cost-justified (i.e., if the expected value of the incremental information is worth the cost of the test), experimentation dominates all other methods of evaluation and prediction.
This Baconian revolution is coming to economics and social science.
In fact, it’s already happening. Weirdo experimental economists are starting to win the Nobel Prize. The recent Economist magazine round-up of the 10 most promising young economists in the world is rife with it. Established economists working in the current paradigm, as always, either dismiss it, or imagine that it is a niche sub-field that won’t affect them. Time will tell, but I think they’re entirely wrong.
Much of the work that we now think of as economics, political science and other social sciences will likely be displaced by some hybrid of biology, experimental economics, psychology and other fields that can evaluate hypotheses for the quantified prediction of human behavior through structured falsification tests (or, sometimes, true “natural experiments” in which non-intentional random assignment has occurred). As I’ve gone into in a recent post on possible interpretations of a specific clinical trial in Ghana, the big constraint on the practical utility of this science will likely be the problem of generalization from experimental results to forward predictions. Even in its current embryonic form, experimental economics already suffers from excessive rhetorical generalization from what some specific group of college sophomores did with $30 to fairly grand statements about human nature. But, as with business experimentation, where applicable, this new approach will dominate what we now think of as classical economics.
This will likely, at least for a long time, not address a lot of territory now covered in economics, including, for example, many of the issues related to the stimulus debates. These kinds of topics will of course remain interesting, and work will still be done on them in an academic setting; it will simply be even more obviously non-science, and be done down the hall in the history, philosophy and literature departments. Where it belongs.
Nullius in Verba.
(cross-posted at Atlantic Business)
Much of the work that we now think of as economics, political science and other social sciences will likely be displaced by some hybrid of biology, experimental economics, psychology and other fields that can evaluate hypotheses for the quantified prediction of human behavior through structured falsification tests (or, sometimes, true “natural experiments” in which non-intentional random assignment has occurred).
I think this is folly, and for precisely the reasons you suggest.
— Freddie · Feb 3, 06:47 AM · #
As someone who believes that human nature in subjects is not self-constituted but is itself constituting, and is performed using an agent’s habitus (to use Bourdieu’s phrase), I’m really skeptical that these kinds of experiments will tell us anything that isn’t highly contingent on the circumstances of their fields.
To ask it a different way, do you find that a lot of the business experiments you perform are generalizable, especially across time, space and cultural capital? You allude to this, but I’m curious how much reflects underlying structures and how much depends greatly on incredibly specific fields.
A lot of the experimental economics, especially the non-development kind, is catching up to where sociology has already been, except it has that physics-glow to it (and more funding). But contra experiments, I wonder if the real future is with ethnographic research into economics. I think Venkatesh’s “Off The Books” sociology ethnography is the best microeconomic, game theory and decisions under uncertainty book I’ve come across – it blows the roof off of a “group of college sophomores did with $30” (a brilliant description) style research (at Illinois they put kids under MRIs and do those trading games. The conclusions they draw are not meek.)
— Rortybomb · Feb 3, 07:19 AM · #
Ultra-realistic large scale lab experiments in the real world have been carried out by market researchers for almost 30 years that make typical experiments in behavioral economics look like a campfire skit in comparison.
From 1982-1985, I managed BehaviorScan test markets for Information Resources Inc. We’d take, say, 5,000 volunteer panelist households in Pittsfield, MA and Marion, IN, and divide them into test and control groups who had purchased equal amounts of Crest toothpaste in the previous year. Then we’d show the test group of 2500 households twice as many Crest ads in their homes over the next year by manipulating the local cable TV system at the individual household level so that the test group saw Crest ads on their televisions while their neighbors in the control group saw public service ads. We’d then record how much Crest and other toothpaste they bought at local supermarkets during the test and see if doubling advertising paid off. IRI had bought checkout scanners for all the supermarkets in town in return for sharing with us data from our volunteer panels.
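The analysis this design supports reduces to a difference-in-differences comparison between the matched groups. A toy sketch with invented numbers (not IRI’s actual data or methodology):

```python
import statistics

def diff_in_diff(test_households, control_households):
    """Difference-in-differences estimate of the ad campaign's effect,
    in the spirit of the BehaviorScan design: because the groups were
    matched on prior-year purchases, compare each group's change from
    its own baseline. Each household is a (prior_year, test_year)
    purchase pair."""
    def mean_change(group):
        return statistics.mean(after - before for before, after in group)
    return mean_change(test_households) - mean_change(control_households)

# Invented toy data (tubes of Crest per year), not IRI's numbers:
test = [(10, 13), (8, 10), (12, 15)]     # households shown doubled ads
control = [(10, 11), (8, 9), (12, 13)]   # households shown PSAs instead
extra_tubes = diff_in_diff(test, control)
```

Comparing changes from baseline, rather than raw purchase levels, nets out whatever happened to toothpaste demand in the town that year, since both groups experienced it.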
This remarkably audacious business, invented by John Malec, Gerry Eskin, and others, was highly profitable in the 1980s. My main client, Procter & Gamble, ran about 50 BehaviorScan tests in those years, paying IRI $100,000 to $300,000 each.
— Steve Sailer · Feb 3, 08:19 AM · #
1. why’ya lumping history with philosophy and philosophy with literature? lame cheap shots dude.
2. i think your emphasis on controlled experimentation for the social sciences (as opposed to developing business strategies) is misplaced. Most scientific knowledge is observational data, i.e. there’s this thing we’ll call the heart in the chest, which is reddish and pumps blood; iron doesn’t burn, it melts; there are a lot of bamboo forests in Asia; the dollar is worth so-and-so versus the yen. This stuff is important regardless of whether there’s a good theory that makes “useful, non-obvious and reliable prediction rules.” And of course the theory depends on the data.
3. here’s what i think is the fundamental issue. Like the other social sciences, economics does not lack for theories. What it lacks is observational knowledge. Nobody really understands just what the economy actually DOES—i don’t mean an explanation of how it does it, but just what it does, or what parts it has. this is the state of biology circa 1600, when genius savant Descartes still thinks that the heat of the heart causes all body movement. later on some guy points a microscope at human tissue and discovers cells. Cells! 4-year-old kids nowadays know about cells. The economy is as opaque to us now as the body was to Descartes; Descartes could theorize all he wanted, but what he really needed was a microscope.
i think the best case scenario for economics is to become like medicine (which is a very different sort of discipline than physics or even evolutionary biology). But to get there we need a lot more facts. We need data on how money actually circulates and what people do with their time and what their relationships are with each other. Ultimately we’ll need to integrate economics, political science, sociology, etc. together, as different systems all operating alongside and interacting with each other. It’s gonna be a long time coming. But that is the path forward, not “Popper is my homeboy” (i don’t think popper himself would even agree with you—he explicitly advocates for data-driven social engineering in The Open Society and its Enemies).
— raft · Feb 3, 08:21 AM · #
Jim writes:
“In fact, you can see debates in mature sciences that sound a lot like the one that Will describes; they just tend to be around frontier issues.”
Yes, but in a science of competitive human behavior, such as economics, all the interesting debates, such as over the $800 billion stimulus, are around frontier issues. We haven’t given out $800 billion stimuli under the current conditions before. Conversely, there are a lot of elements in economics that aren’t debated, such as the wisdom of raising the minimum wage to $100 per hour. That’s not interesting any more because everybody is convinced by economists that it wouldn’t be a good idea.
Moreover, a science of competitive human behavior is going to have more frontier issues than a science dealing with unthinking objects like protons because humans learn and change their responses. For example, in the 1960s, the Phillips Curve theory of having some inflation to cut unemployment worked pretty well. In the 1970s, it stopped working as people figured out they were being ripped off.
Thus, the things we are most interested in will be that which will be most hard to predict.
— Steve Sailer · Feb 3, 08:35 AM · #
It’s somewhat tangential but I’m hating your definition of science — and indeed your concept of science as laid out in the article — because I sort of think mathematics isn’t science (and it meets your definition, but it falls down because it’s all internal: you’re making nonobvious predictions which are validated entirely within the theory itself) and I totally, for sure think engineering isn’t science (and in fact your post reads to be, really, about engineering, not science). In fact your airplane analogy shows that well: long after people were flying planes everywhere and could make decent planes there was still a lot of discussion about whether what the wing was doing had more to do with Bernoulli (lower pressure above the wing from faster airflow) or Newton (redirection of flow downwards) and even now cutting-edge airfoil design often relies on observation-based fluid flow models that don’t always have the theory behind them. As with drug design, engineering leads science and it’s only pretty late in the game that people can try to apply basic fluid mechanics to make a new aircraft de novo or to fit a drug into a protein, and even then those guys tend to get their lunches eaten by the empiricists. Science isn’t, as this post implies, exactly about “what works” — that’s engineering, and most often we have tools (or drugs) before we actually have the theoretical infrastructure to understand how they work (go figure).
Nor is it fair to say that science is “non-obvious.” For one thing, a lot of science is codification of the obvious or drawn observationally (this is why Einstein said thermo alone couldn’t be overthrown: because things like the conservation of energy really didn’t have any theoretical justification but were based on large-scale, repeated observation: this isn’t theory, it just is). A lot of science is figuring out how to take something that is observationally clear and figuring out how to codify it into the system (think Noether’s theorem: it’s sort of obvious that a symmetry in a system implies that some mechanical property of it is conserved, but, damn, it’s a pain translating that into predictive equations of motion.)
Science has much more to do with fitting observational and practical data into abstract, theoretical frameworks, which are then validated by further observation and experiment (but are often still useful even if not so validated). From that perspective economics is very much a science. What it isn’t, is engineering: it’s damn hard, as far as I (or Will?) can tell, to use it to design something (like a stimulus package).
— Sanjay · Feb 3, 02:22 PM · #
“Thus, the things we are most interested in will be that which will be most hard to predict.”
Steve makes a good point, and one not out of line with what Jim is saying. The tricky thing about the application of an experiment in a complex environment like the economy is identifying how everything will react, and doing so in law-like fashion. For example, while I like stuff like system dynamics a lot, my concern with it is that it posits so many relationships, all operating interdependently, that the misspecification of one makes it liable to tip the system out of balance. Again, I think these can be great tools for identifying relationships post hoc, but limited in their ability to accurately predict given limited information, and in that social laws ‘seem’ to act in a law-like manner only in fuzzier ways than natural-science laws do.
Again, Jim, I don’t know the tools that you guys use, but those are just some off the cuff thoughts.
To play my own devil’s advocate, a key question is: what is the alternative? If we have a choice between becoming more informed through preliminary lab designs vs. just going off intuition completely, I think we should go with the former approach.
My guess is that these tools can explain more variance in the desired DV than nothing (or intuition, or gut, etc.); I just think our level of confidence should be in proportion to this identified amount of predictive explanation.
— Peter · Feb 3, 02:27 PM · #
You’re bucking for an “American Scene Crashed and Burned” headline over at “Grasping Reality with Both Hands.”
— Boz · Feb 3, 02:54 PM · #
Rortybomb (I see you’ve given into convention, and are capitalizing your “name”):
I agree, and tried to emphasize, that I think the big issue that will limit the utility of this science will be generalization from experiments, and I think naive boosters of it are ignoring this issue at their peril. As a very crude observation, in response to your question, we have developed meta-rules for generalization of experiments across time (the most foundational issue), culture and other dimensions that for the limited case of business tests have proven themselves to be useful. Of course, even these are contingent.
— Jim Manzi · Feb 3, 04:20 PM · #
Raft:
1. I’ve got more cheap shots than a frat house on a Saturday night. I’m so full of cheap shots, they call me the Jaegermeister. I can go on like this all day. But lame? – No.
2. I believe that most useful science, and what I was talking about here, proceeds from observations to causal rules.
3. “Knowing what the economy does” is, I think, a rough way of saying “moving from observation to causal models”. I think medicine is a good analogy – in fact, what I’ve described as business experiments are really very similar to clinical trials.
— Jim Manzi · Feb 3, 04:32 PM · #
Boz:
I’ve been insulted in print by Brad more times than I’ve done cheap shots.
— Jim Manzi · Feb 3, 04:39 PM · #
I think I can imagine how you could design an experiment that would yield some useful data on a business case, but I’m having a hard time seeing how you can design experiments that illuminate issues of macroeconomics. How could you experiment with worldwide currency flows, for instance, or a national currency deflation?
It seems like the science most analogous to economics is meteorology, if each individual water droplet had a mind of its own. And how do you experiment on the weather? What they basically do is what it seems like economists do: make observations, create models, then see how well the models predict real world weather.
But this is for sure not my field (neither is meteorology). What kind of experiments could you do that would give some useful information about macroeconomics?
— cw · Feb 3, 05:11 PM · #
cw:
I agree. This is what I was trying to say by posting that I didn’t think the macro issues under debate would become science-like within the foreseeable future.
— Jim Manzi · Feb 3, 06:02 PM · #
Simpler Will:
The normative epistemology of economics is immature. Cultivating it requires curiosity, disciplined readjustments, and logical consistency. Success in this endeavor will be selected for.
— JA · Feb 3, 06:06 PM · #
“Yes, but in a science of competitive human behavior, such as economics, all the interesting debates, such as over the $800 billion stimulus, are around frontier issues.”
True, but the problem is that 70 years later, macroeconomists don’t agree on what happened in the Depression and whether the New Deal was too much Keynes, not enough Keynes or whatever.
I’m sure many professional economists are very bright people and have interesting insights into human economic behavior, but as Jim rightly points out, that’s all they have: insights. There is no truth there.
— Steven Donegal · Feb 3, 09:16 PM · #
On a related note, my brother works as a guard for a prison (26th/Cali for the Chicagoans), and if I had actually done the PhD in Economics, I was going to try and float the idea that I could (exogenously) inflate the number of cigarettes and cans of mackerel that they use as currency in the cellblocks to observe what happens to inflation. Are the wages sticky?
I can only assume nobody would have let me do that. But how cool would that be?!?
— Rortybomb · Feb 3, 10:04 PM · #
They use cans of mackerel?
— JA · Feb 3, 10:45 PM · #
Rortybomb:
About as cool as it gets.
JA:
Yes, there was this really interesting newspaper article about it a few months ago that was going around the blogosphere.
— Jim Manzi · Feb 3, 11:35 PM · #
Found it, thanks!
— JA · Feb 4, 12:02 AM · #
By the way, Procter & Gamble, the most respected consumer packaged goods firm, cut back on their rate of doing giant BehaviorScan test markets in the late 1980s after doing about 50 of them because they felt they had learned some general rules about business and marketing. One that I can recall is that new and/or larger ad campaigns mostly work only if you have something new and interesting to say about your product — e.g., “Crest has a new breakthrough ingredient that removes from your teeth what the American Dental Association calls ‘the tragedy of scuzz,’” or whatever. For an established brand, however, just increasing brand awareness advertising is seldom profitable.
Indeed, the BehaviorScan results showing scientifically that increased advertising only occasionally worked were likely one factor in the recession that hit TV advertising in the otherwise prosperous mid-to-late 1980s.
I’m often struck by how few economists have ever heard of BehaviorScan.
— Steve Sailer · Feb 4, 01:58 AM · #
Sorry, Jim, you’re looking for the keys under the lamppost again.
The key observation is that none of the important policy-related questions in economics are even remotely accessible to inductive experiment. Moreover, the trend you describe was new at the beginning of the 20th century, not the 21st. It is rediscovered over and over again, and fails each time. I’m sure you’ve read Milton Friedman’s ’50s paper on “positive economics.” How’s that monetary policy workin’ for ya now?
Today’s macroeconomic models have been choked senseless with data series. They are so full of numbers, when they cough little clouds of digits spurt out of their ears. As you yourself note, all of them were decisively falsified by the events of 2008.
The future of economics? Not induction, but deduction. Not data, but logic.
— Mencius · Feb 4, 07:45 PM · #
Mencius:
As per the post, macroeconomics will not for a long, long, long time be addressable by induction and experiment. There’s a lot of economics that is interesting and important that is not macro.
— Jim Manzi · Feb 4, 08:15 PM · #
Jim,
Why do you have to say “a long, long, long time?” Why not just say “never?” You leave the impression that we are somehow working toward this impossible goal. It’s like saying that levitation will not for a long, long, long time be addressable by Transcendental Meditation.
The “economics that is not macro” is extremely well-understood. Is there any controversy about microeconomics? To the extent that anything in it is “interesting and important,” it is mostly marketing research of the sort that Steve Sailer describes. The tragedy of scuzz!
Furthermore, American business already has gone as far as it could toward scientizing the management process, and then stepped back – perhaps you’ve heard of Robert McNamara? Every twenty years or so there is a craze for quantitative decision-making. It never lasts.
The world has no shortage at all of McNamaras – what Carlyle called the “beaver intellect.” It never has enough Steve Jobses. (I’m always amused by hearing the likes of the New York Times praise Jobs for his “autocratic management style.” Ah, democracy.) Aristotle called the Jobsian quality phronesis. This, too, is our past and maybe our future.
Speaking of Carlyle, I think you’d enjoy grappling with his Chartism, especially the chapter on Statistics. If you have a good answer to it, you’re a better man than anyone else since 1840.
— Mencius · Feb 5, 12:22 AM · #
I only qualified the statement to recognize that nothing can be absolutely ruled out.
I agree that there will never (or at least not for a long, long, long time) be a “science of management”. That doesn’t mean that we can’t run experiments to establish specific causal results.
— Jim Manzi · Feb 5, 04:03 AM · #
Plenty of things can be absolutely ruled out. Such as flying through meditation. The way you introduced your post gave readers the impression, IMHO, that the goal of scientific macroeconomics was difficult but incrementally attainable. I apologize if this was not your intention.
As for experiments, sure – but the “looking for the keys under the lamppost” analogy is one I’d be wary of. (In case anyone is not familiar with the joke: a drunk loses his keys, then later is observed looking for them around the base of a streetlight. Queried on his methodology, he replies that sure, he could have lost his keys anywhere, but he is looking for them where it’s light.)
Yes, we can run some experiments to establish some results – as Steve’s delightful example makes it so clear. Yes, there is a streetlight, yes the light is on, and yes the keys might be under it.
However, at least as compared to other cultures in history, ours is in no danger of forgetting the value of the scientific method. It is in some danger of forgetting the fact that sometimes, when you lose your keys, they are not actually under the lamppost – and yet you still need to find them.
A fine example is the disaster of the credit rating agencies, which are of course effectively part of USG. One of the many contributing factors to our little perfect storm was the idea that useful credit ratings could be constructed mechanically and scientifically, with purely quantitative methods – “by steam,” as Carlyle would have put it. With no messy, personal phronesis needed. How’s that workin’ out for ya?
Of course, this is exactly how USG prefers to “set public policy” – nothing can possibly rest on personal judgment. There must be a process, preferably as scientific as possible. (Although, if you’re a New Deal fan or even if you’re not, you might be amused by the story of how FDR set the gold price. “Because it’s three times seven.” Maybe there’s a reason we don’t want these people making actual decisions.)
The credit-rating agencies (NRSROs), written into the law of bank regulation as they were, had to be treated as makers of public policy. Implying that however these AAA ratings were contrived, it couldn’t be by some sweaty old white guy with a green lamp. And presto: science to the rescue. Of course, it wasn’t actually science in the Popperian sense, but how many philosophers of science do they have at the New York Fed?
This pattern of systematic, thoroughly-unjustified scientism is why Carlyle’s message, despite being 169 years old, still seems novel and fresh. You’ll note that Carlyle is responding to the Benthamites, who shared your optimism about the potential of social science. And indeed Bentham is still with us as well – though I would not call him fresh.
— Mencius · Feb 5, 05:55 AM · #
Mencius:
I obviously mis-communicated. I meant this as an indictment of the whole paradigm of current social science.
This relates to your lamp post story. My point (to put it a bit over-strongly) is that the only place we can look is under the lamp post: where we have experimental results, we can answer questions; but questions that are out there in the dark are always (or at least for a long, long, long time) going to stay dark. We want scientific answers to them, but we aren’t going to get them.
— Jim Manzi · Feb 5, 04:13 PM · #
Then we disagree on only one point. You say we want scientific answers to these questions. I say we want reasonable answers to these questions – the scientific method being one, and only one, way of reasoning.
Science is a special case of reason. And a very useful case it is – but only one. Science works, as Popper explained, because it is reasonable. The fallacy is to assume that because science is reasonable, everything other than science must be unreasonable. Socrates is not a cat; a dog is not a cat; therefore, Socrates is a dog.
But in our science-mad age, in which the scientific method is perceived as the only legitimate process by which the State can make decisions, most accept this fallacy. So today we have a government which can crunch data like no one’s business, but which cannot, otherwise, think.
Allow me to present a small illustration of the danger of lusting in one’s heart, as Jimmy Carter put it, for the impossible sex fantasy of Popperian public policy.
Consider this fascinating bit of social science [PDF]. I’m confident that anyone whose knowledge of history predates 1960 or so will find the author’s use of the word “counterintuitive” quite amusing. (Not to mention “treatment,” which is either epidemiological jargon, or a translation from the Russian – or, of course, both.)
Of course it is perfectly intuitive that collective punishment is an effective “treatment” for collective misbehavior, and in fact it has worked perfectly well since Caesar was a little boy. But as the author points out, it is the converse theory, which is both counterintuitive and obviously wrong, which 20th-century “social science” has proved via a stack of papers as tall as the World Trade Center. To us it is apparent that, since the social science that produced these papers was in fact cargo-cult science, the authors’ efforts served only to transmit their own moral prejudices into the scientific literature, and thence to government policy. The result: no more World Trade Center. I exaggerate – slightly.
Now, the pendulum swings back. The counterintuitive lie has become old hat, and no one can make his name retelling it. So Professor Lyall uses the tools of social science to propose the counter-counterintuitive truth. Perhaps the social-science community will embrace his hypothesis, and truth will win out in the end. But really, man, did we need to go there? (If you want to see the intuitive truth in its native habitat, and De Bello Gallico is just too old and weird for you, try this [PDF]).
A more familiar example of this pathology is the “broken-windows effect.” Again, in this case, social scientists constructed a theory – that disorder is contagious and self-catalyzing – which conflicts with the consensus of 20th-century social science, that the best way to suppress crime is to cosset and caress it. Again, the new, counter-counterintuitive theory is strangely identical to the old, reactionary, phronetic, intuitive theory, as found for instance in Peel’s principles.
(Indeed, the story of England in the 20th century is a convenient experiment in the transition from intuitive to counterintuitive: crime rates, offences per capita known to the police, rose by roughly a factor of 50. Not 50%, 5000%.)
Obviously, we’ve relearned a lot about policing and perhaps even a thing or two about artillery. Economics, however, remains firmly lodged in the hands of cargo-cult science. Consider this comment, which I found on a well-known financial blog – the author is a liberal, but quite trustworthy:
“The key discussions are held among the economists, with the principals not even able to follow much of the conversation. No joke, an economics prof who is deeply interested in politics and has a lot of students in policy roles, (including a key Obama player who will go unnamed) says it as if the pols are surrounded by theologians who speak only medieval Latin, and defer to them on matters technical (which are nine times out of ten the ones that really count).”
This is the fruit of “wanting scientific answers”: a new era of Scholasticism. Thanks for nothin’, Voltaire. Is it sufficient to grudgingly renounce this evil? Or must it be actively opposed?
— Mencius · Feb 5, 06:51 PM · #