The American Scene

An ongoing review of politics and culture


Articles filed under Economics


An Equanimous Take On eBook Price Controls

I swear I tried to be as “fair and balanced” as possible here, but I’ve always been of the opinion that objectivity not only does not preclude, but in fact sometimes requires, taking a stand.

Thus, I present my case against eBook price controls.

Not Your Father's (Relative) Decline

I’ve been meaning to write something about this topic for some time, a follow-up to something I wrote a year and a half ago, about Obama’s (and America’s) tricky role at a moment of national retrenchment:

The Obama Administration’s situation may be compared with that of the Nixon Administration. Both Presidents were trying to manage a period of retrenchment in foreign affairs, dealing with a situation in which American influence and leverage had significantly contracted, and facing the prospect of further contraction that needed to be carefully managed. They were also both dealing with a period of traumatic economic change (accelerating inflation in Nixon’s case, a near-depression in Obama’s); with foreign wars that they did not initiate but had committed to winning and, in some manner, escalating in order to win (Vietnam, Afghanistan); and with a radical change in the global currency regime (in Nixon’s case, the demise of the gold standard; in Obama’s, the coming demise of the dollar as global reserve currency) – all of which provides some context for why each period was a period of retrenchment.

We should expect that there are going to be a lot of “concessions to reality and common sense” in the next few years, and the frustrating part is that we’re not going to get anything obvious for these concessions. Russia, for example, is going to keep pursuing its interests – and the aggressiveness with which they pursue them will probably mostly relate to their internal political situation rather than their perception of either American “will” to oppose them or American “goodwill” towards them. That’s going to make it very easy for the administration’s political opponents to make the argument that “if you give ‘em an inch, they’ll take the yard” even if no actual yards are literally taken.

It will be interesting to see how President Obama handles the tricky domestic politics of the tricky international situation he finds himself in.

I stand by that analysis, but I wanted to point out that the larger context of “relative decline” talk today is quite different from the superficially similar talk from the 1970s, and it’s worth highlighting that difference.

Declinist talk in the 1970s (which continued well into the 1980s) took place in the context of three trends that raised questions among serious people about whether the American system was “winning the future” as you might put it. Those three trends were: the apparently robust expansion of the Soviet bloc through the 1970s, culminating with the Nicaraguan revolution and the Soviet invasion of Afghanistan; the rise of OPEC and the huge spikes in the price of oil in 1973 and 1979; and the rise of Japanese manufacturing prowess (American electronics manufacturers began to collapse in the face of Japanese competition in the 1970s). All three trends raised the specter of real pessimism about America’s prospects. Communism was antithetical to the American way of life; if Communism was on the march, then that implied real deficiencies in our own system. America had been blessed with abundant natural resources, oil being one of the most crucial; if we were now reliant on imported oil, what did that mean for the viability of the American system? Particularly when the Japanese, who had few natural resources of their own, appeared to be eating our lunch following an apparently mercantilist economic model. (To these trends, one might add arguably the most important social trend of the 1970s, the enormous rise in urban crime.)

Projection of American decline, then, was pessimistic: a prediction that our way was failing, and that foreign ways were (increasingly) more successful.

But that is only peripherally the case for declinist talk today. Rather, today’s projections of relative decline are based on optimistic projections – specifically, projections that other major parts of the world will achieve economic and political success by copying us. China has not only abandoned Communist economic policies, but to a very considerable extent its economic success reflects a shift to a very familiar economic model (heavy government involvement in infrastructure development and education, coupled with a very entrepreneurial culture and an openness to foreign investment and integration into global manufacturing chains) – a much more familiar model than Japan’s, by the way. India has followed a different path, but their huge jump in growth has followed a liberalizing turn that was very much inspired by American economic advice. And in the political sphere, Europe has followed both American advice and the American model, evolving fitfully towards something resembling a United States of Europe. They may not get there – indeed, most people would say the odds are they won’t get there – but the point is that those same people would probably acknowledge that if they do get there, a truly united Europe would swing a whole lot more weight in global affairs.

For America to remain the global hegemon that it tries to be today, all these trend lines would have to go dramatically negative. China’s and India’s economic advances would have to stall out, leaving most of their two and a half billion people permanently mired in poverty. And Europe’s political project would have to collapse in failure as well. And such developments would represent not only failure for the human race, but failure for America inasmuch as the positive developments we’ve already seen on these fronts represent (in part) attempts to follow the American example and American advice. (Not to mention that they would represent a worse absolute result for American citizens than the alternative – we would have benefited from higher Chinese and Indian productivity much more than we would from their poverty, and a strong and united Europe would be a much more useful ally than a fractious and divided one, even if more independent-minded.)

A refusal to prepare for relative (not absolute) decline, then, represents some combination of a willful failure of imagination and a kind of jealous pessimism. America, in this way of thinking, isn’t an example and an inspiration to the world. Rather, America has some distinctive grace that the rest of the world lacks – and will always lack, no matter what they do.

That might, in fact, be what partisans of “American exceptionalism” mean when they use that phrase. But it’s a pretty ugly idea when you think about it.

A Moment of Communion with Paul Krugman

Steve Sailer was struck by the exact same passage as I was in a profile of Paul Krugman:

Back in 2006, when he was writing The Conscience of a Liberal, Krugman found himself searching for a way to describe his own political Eden, his vision of America before the Fall. He knew the moment that he wanted to describe: the fifties and early sixties, when prosperity was not only broad but broadly shared. Wells, looking over a draft, thought his account was too numerical, too cold. She suggested that he describe his own childhood, in the middle-class suburb of Merrick, Long Island. And so Krugman began writing with an almost choking nostalgia, the sort of feeling that he usually despises: “The political and economic environment of my youth stands revealed as a paradise lost, an exceptional moment in our nation’s history …”

Krugman remembers Merrick in these terms, as a place that provoked in him “amazingly little alienation.” “All the mothers waiting to pick up the fathers at the train station in the evening,” he says, remembering. “You were in an area where there were a lot of quiet streets, and it was possible to take bike rides all over Long Island. We used to ride up to Sagamore Hill, the old Teddy Roosevelt estate.” The Krugmans lived in a less lush part of Merrick, full of small ranch houses each containing the promise of social ascent. “I remember there was often a typical conversational thing about how well the plumbers—basically the unionized blue-collar occupations—were doing, as opposed to white-collar middle managers like my father.”

To state the obvious, this is in many ways a profoundly conservative sentiment. Note the love of the particular, specific and local lived experience, and also the lack of conventional liberal observation (in this passage) of the greater racism of that era, or the conformity and sexual mores against which “the Sixties” rebelled. I think that, seen in its best, and correct, light, what Krugman is expressing here is the desire that as many people as possible should have access to this kind of middle-class life.

I’m somewhat younger than Krugman, but as they say, the future arrives unevenly. I grew up in a small town with an experience not unlike this. I’m very sympathetic to Krugman’s choking nostalgia. It’s difficult to convey the almost unbearable sweetness of this kind of American childhood to anybody who didn’t live it.

The safety and freedom that Krugman describes are rare now even for the wealthiest Americans – by age 9, I would typically leave the house on a Saturday morning on my bike, tell my parents I was “going out to play,” and not return until dinner; at age 10, I would go down to the ocean to swim with friends without supervision all day; and at age 11, I would play flashlight tag across dozens of yards for hours after dark. And the sense of equality was real, too. Some people definitely had bigger houses and more things than others, but our lives were remarkably similar. We all went to the same schools together, played on the same teams together, and watched the same TV shows. The idea of having, or being, “help” seemed like something from old movies about another time.

Almost anybody who experienced it this way (and of course, not everybody did) intuitively wants something like it for his own children. The tragedy, in my view, is that, though we all thought of this as the baseline of normality, this really was an exceptional moment in our nation’s history.

My motivation in writing about political economy is, in some ways, much like Krugman’s. But rather than seeing that moment as primarily the product of policies like unionization, entitlements and high taxes, as is Krugman’s view, I believe that it was primarily the product of circumstance. We had just won a global war, and had limited competition; we had a huge wave of immigration, followed by a multi-decade pause; oil was incredibly cheap; a backlog of technical developments had yet to be exploited and scaled up, and so forth. We can’t go back there, at least not exactly.

This difference in diagnosis leads me to radically different views about what we should do now.

(Cross-posted at The Corner)

The Conservative Case For a Higher Gas Tax

I think Jim has made one, inadvertently, though I am, as always, open to being shown the error of my logic.

In my comment to Jim’s post from yesterday, I argued the following:

If the goal is to reduce oil consumption (for ecological or other reasons) then to the extent that demand [for oil] is elastic a tax should be efficacious in achieving that goal (because higher taxes should drive down demand).

If the goal is to raise more revenue in an efficient manner (or to reduce the inefficiency of the current code by offsetting the increased revenue with cuts in other taxes that have a bigger economic drag), then to the extent that demand [for oil] is inelastic the tax should be efficient in achieving that goal (because the tax will not cause material changes in demand, and therefore will not materially distort economic decisionmaking in aggregate – much as a VAT is considered highly efficient).

To the extent that a higher tax on oil has both goals (which is I think the argument most advocates would make), then the real impact of whatever the actual elasticity of demand for oil turns out to be is on which goals are more effectively achieved. The more elastic demand for oil proves to be, the lousier the tax will be as a revenue-raiser, and the more distorting it will be of economic choices, but the better it will be at achieving the ecological goals of the tax. The less elastic demand for oil proves to be, the lousier the tax will be as an ecological measure, but the more effective and efficient it will be as a revenue raiser.

Jim conceded this essential point. He believes the evidence points to the conclusion that demand for oil is not very elastic, and therefore a tax would not be terribly efficacious in changing consumer behavior or in driving innovation, at least not at any seriously plausible levels of tax. But for that very reason it would be a good – in the sense of economically efficient – revenue-raiser.

Jim makes two points against adopting a new carbon tax or value-added tax, however. First, he argues, it’s important to limit the number of taxes, simply because the multiplication of points of taxation makes it easier for the government to raise the tax burden overall, as none of the headline numbers seem terribly large. Second, a carbon tax or any other complex attempt to price the externalities associated with fossil fuel production and consumption would fall prey to special-interest pleading, resulting in something full of loopholes that makes no material impact on the externalities it was intended to address, raises relatively little revenue, and introduces distortions into investment and consumption decisions that create a material drag on economic performance.

It seems to me that, however strong you think these arguments are with respect to a carbon tax or a value-added tax, they are much weaker arguments against a higher gas tax. First, the gas tax already exists; we’d be raising an existing tax rather than creating a new one. Second, it is difficult to see how special interests could materially introduce exceptions to a rise in the gas tax – certainly relative to a value-added tax or a carbon tax. Or, for that matter, an income tax or a property tax.

Jim makes the point in today’s post that a solution to the problem of climate change will require either a massive drop in living standards or significant technological advances. He thinks the best way to achieve significant technological breakthroughs would be for the government to fund basic research. Such funding would require additional revenue. That revenue would, I should think, be better raised from more-efficient taxes than from less-efficient ones. I should further think that a tax with a clear relationship to the problem being addressed would be politically preferable to relying on general federal revenues.

The bottom line: if consumer behavior doesn’t change when it is raised, then a gas tax is an efficient way to raise revenue. If consumer behavior changes greatly in response to small changes in the tax, then it’s an efficient way to reduce oil consumption – and the costs of changing to an economy less dependent on oil may have been overestimated. Since the gas tax already exists, and is extremely simple, raising it poses fewer risks of special-interest capture than imposing a new tax. If the goal is to restrain government spending, a rise in the gas tax could be offset with cuts in other, less efficient taxes, for a net gain to economic efficiency.
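To make the tradeoff concrete, here is a minimal back-of-envelope sketch in Python, assuming a simple constant-elasticity demand curve. Every number in it (the baseline price, the gallons consumed, the size of the tax, the candidate elasticities) is invented purely for illustration:

```python
# Sketch of how one gas tax trades off between cutting consumption and
# raising revenue, depending on the price elasticity of demand.
# All numbers are invented for illustration.

def tax_effects(base_price, base_quantity, tax, elasticity):
    """Constant-elasticity demand: Q = Q0 * (P / P0) ** elasticity."""
    new_price = base_price + tax
    new_quantity = base_quantity * (new_price / base_price) ** elasticity
    consumption_cut = 1 - new_quantity / base_quantity
    revenue = tax * new_quantity
    return consumption_cut, revenue

# Hypothetical baseline: $3.00/gallon, 140 billion gallons/year, $1.00 tax.
for elasticity in (-0.05, -0.3, -1.0):
    cut, revenue = tax_effects(3.00, 140e9, 1.00, elasticity)
    print(f"elasticity {elasticity:+.2f}: consumption falls {cut:.1%}, "
          f"revenue roughly ${revenue / 1e9:.0f}B/year")
```

The pattern, not the invented numbers, is the point: at the inelastic end the tax barely dents consumption but collects a great deal of money, while at the elastic end it cuts consumption substantially and collects less.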

A Gas Tax: Numbers vs. Words, ctd.

Yesterday, I wrote a post using data from Kevin Drum’s blog in which I argued that any feasible U.S. gas tax would be highly unlikely to free America from the need for sourcing massive amounts of oil from unstable regimes, or to make a meaningful dent in potential global warming damages. This is because, among other things, the price elasticity of oil is just not high enough. Ryan Avent at The Economist has written a good-faith reply that I think, ironically, reinforces this point.

Avent’s first criticism centers on long-run elasticity being much higher than short-run elasticity (essentially, that I can drive a little less this year if the price of gas goes way up, but that if it stays high over 10 years, I can buy a smaller car, and bring my usage way down). This is, of course, correct. It’s why I only quoted the long-term price elasticity estimates from Drum’s post. According to the IMF study on which these are based, “long term” here means twenty years.

Avent’s second criticism is that if one believes (as I do, and as I stated in the post) that the key to reducing ceteris paribus fossil fuels consumption in the U.S. is improved technology, “then higher prices are a good way to encourage their development.”

They are a way to do this, certainly, but not necessarily a good way.

Start with a rigorous definition of “new technology” for this purpose. This doesn’t just mean geo-thermal powered rocket packs, but also things like better bus routing software, improved rail tracks, and more energy-efficient housing construction materials. Either consumers would or would not choose any one of these new technologies under current conditions. If you use a tax to push up the price of fossil fuels, and this changes the consumer decision calculus so that they are now willing to choose some alternative that they otherwise would not have, it is because you have foreclosed a choice they used to have that they prefer to any of the options that are available after the tax increase. Another way of saying this is that you have just lowered their material standard of living.

When I say “new technology,” then, I mean technical advances that create new alternatives that people would choose to employ instead of fossil fuel based alternatives at current prices. That is, improvements to material standard of living that also have the benefit of reducing fossil fuels consumption.

Now, to evaluate Avent’s argument that taxing fossil fuels is a good way to induce new technology, consider an analogy. Suppose that there is a chemotherapy drug that increases the 5-year survival rate for a specialized type of cancer from 10% to 60%, but with horrible side-effects. Some scientists in a couple of university labs have had some promising results with basic compounds that might or might not ultimately be precursors to a new drug that could get better increases in survival rates, and without many of the awful side effects. If you believed that improving treatment for this disease should be a major public priority, would your preferred approach be to add a large tax to chemotherapy? This is, in effect, what Avent is proposing as a way to encourage the development of alternative energy technologies. I’d fund NIH research into the new alternative drug.

Finally, Avent argues that a gas tax is a great idea whether or not it really reduces fossil fuels usage, because even if I’m right and it won’t eliminate that much oil consumption, then it will be “a great way to generate revenue” (i.e., will result in a ton of additional tax collections). That is an entirely different argument, which would concede the point I was making in my post.

(Cross-posted at The Corner)

A Gas Tax: Numbers vs. Words

There are lots of arguments that sound good for raising taxes on fossil fuels to reduce the threat of global warming and/or to reduce our dependence on foreign oil. Liberal, but analytical, blogger Kevin Drum points to a recent IMF study that implicitly argues this idea is extremely unrealistic, because any such tax at any rate that has ever been discussed just won’t reduce consumption enough to matter. According to Drum’s interpretation of the IMF study, raising the price of oil by 50% would lead to a long-term reduction in consumption of less than 2% in the developing world, and less than 5% in developed economies.
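As a gut-check on the magnitudes (my own back-of-envelope arithmetic, not a calculation from the IMF study or from Drum's post), a constant-elasticity demand curve turns those figures into implied long-run price elasticities:

```python
import math

# Implied long-run price elasticity under a constant-elasticity demand
# curve: elasticity = ln(Q1/Q0) / ln(P1/P0). My own reading of the
# figures Drum cites, not the IMF's own calculation.
price_ratio = 1.5  # a 50% increase in the price of oil

for region, quantity_ratio in (("developing world", 0.98),      # <2% cut
                               ("developed economies", 0.95)):  # <5% cut
    elasticity = math.log(quantity_ratio) / math.log(price_ratio)
    print(f"{region}: implied elasticity of roughly {elasticity:.2f}")
```

That works out to roughly -0.05 and -0.13: very low indeed.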

Drum can take some comfort from the fact that the kinds of analysis that produce such estimates are highly unreliable.

On one hand, you don’t need a lot of fancy econometrics to reach the basic conclusion that we could double the price of oil, and we’d still be carefully examining succession issues in Saudi Arabia. For example, it’s simple to observe that even really large, sustained price swings haven’t prevented amazingly steady growth in U.S. gasoline usage for more than half a century. Yes, people react to prices, but it’s hard to imagine that we could today impose a price high enough to get out of the structural problems of global warming (to the extent that you accept that) or our dependence on unstable regimes for oil.

And on the other hand, price elasticity in the future cannot be divined by such models. As the available trade-offs change, the price elasticity of oil will change. Specifically, to the extent that we continue to progress in making non-fossil-fuels technology cheaper and more effective for an ever wider array of applications, we can accelerate the ongoing de-carbonization of our economy. The economists’ idea of using artificial scarcity pricing to do this is aggressively marketed in blogs, magazines and TV shows, but is extremely unlikely to work, because the current price elasticity of oil is so low. The work of engineers and physical scientists, however, is likely to be determinative.

(Cross-posted at The Corner)

PEG Leads, The Economist Follows

This may become a regular feature.

A Very Good Question about Our National Debt

On Friday, I published a blog post on the national debt with key points that can be summarized as: (1) the idea that we will have anything like currently anticipated entitlement payouts plus currently anticipated tax rates is a fantasy, (2) this gap is enormous, and represents the “mother of all bubbles,” (3) our debt situation means that we need to address it quite soon or face a funding crisis, and therefore (4) the correct primary metric for evaluating anybody’s plan to do this is what practical measures it puts in place now and how much additional time this creates for us prior to this crisis, rather than theoretical and unenforceable promises about the distant future.

Very smart TAS commenter “cw,” who almost always disagrees with me in a highly productive way, asks what I think is an excellent question:

So here is a technical question for Jim or whoever else can answer it: how much would taxes need to be raised to maintain our current entitlement regime?

The total present value of payments expected under Social Security and Medicare beyond what is expected to be collected under current tax laws is about $100 trillion.

One way to put that amount of money in context is to note that it is about twice the amount of all the net private assets that exist in America today.

To answer cw’s question directly, the best back-of-envelope estimate is that meeting this unfunded portion of the commitments implied by our current rules governing just Social Security and Medicare would require roughly an immediate 80% increase in federal income taxes, sustained forever.
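For those who want to see roughly where a number like that comes from, here is a stylized reconstruction of the back-of-envelope logic. The receipts figure and the gap between the discount rate and the growth rate are assumptions of mine for illustration, not the actuarial inputs behind the estimate:

```python
# Stylized reconstruction; the receipts figure and the discount-minus-
# growth spread are illustrative assumptions, not actuarial inputs.
unfunded_liability = 100e12    # ~$100 trillion in present value
income_tax_receipts = 1.1e12   # assumed annual federal income tax receipts
r_minus_g = 0.009              # assumed discount rate minus growth rate

# A permanent tax increase that grows with the economy is a growing
# perpetuity: PV = increase / (r - g), so increase = PV * (r - g).
required_increase = unfunded_liability * r_minus_g
share = required_increase / income_tax_receipts
print(f"required permanent increase: ${required_increase / 1e12:.1f}T/year, "
      f"about {share:.0%} of income tax receipts")
```

Note how sensitive the answer is to the assumed spread between the discount rate and the growth rate, which is one reason such estimates should be read as order-of-magnitude statements.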

That is one end of a spectrum. The other is to cut out $100 trillion of present value of anticipated entitlement spending.

I was at a dinner last year with about 15 well-known Washington think tankers, academics, journalists, bloggers and budget experts, entirely focused on the question of where on this spectrum we will end up. What was so striking to me is that, as we went around the table, the majority of these people confidently asserted which positions would be politically feasible or infeasible. Many of these equally confident-sounding assertions were contradictory, and, not shockingly, tended to line up roughly with each speaker’s political inclinations.

It would be simple for me as an economic conservative to dismiss the idea of a tax increase equal to an 80% increase in income taxes as politically unrealistic, but I’m not so sure about that. In the event of a crisis, I could easily imagine “emergency” income taxes on the “most fortunate among us” plus some increases in middle class tax rates, plus the introduction of a VAT, that got to something like that.

If you had asked me at a New Year’s Eve party in 2006 what I thought the odds were of the U.S. government taking a controlling interest in the largest bank, the largest car company and the largest insurance company in America, I would probably have laughed at you. Yet within 36 months, this is exactly what had happened.

My friends who are more liberal than I am probably should not make the analogous mistake of imagining that benefit reductions that seem politically absurd right now cannot come to seem not so absurd, and surprisingly quickly.

If you think about it, any real solution to the federal deficit problem is currently politically impossible, yet we know mathematically that, barring a productivity miracle, the situation cannot persist indefinitely. Therefore, we know that some change that currently seems politically impossible is all-but-certain to happen sooner or later. I have no idea what change will become politically feasible in the future, but then again, I don’t think anybody else does either, because it is not written in the stars – it will depend on some combination of events and political leadership.

(Cross-posted at The Corner)

Controlling the National Debt: Chess, Not Checkers

Paul Ryan’s widely-discussed budget proposal prominently features several charts designed to emphasize the severity of our budget problems. It’s easy to remember these long-term forecasts by using a lot of 80s: in 2080, government spending is projected to be a little less than 80% of GDP, and accumulated debt is projected to be over 800% of GDP.

The one thing that I can state with some confidence about these specific forecasts is that they are wrong. In the real world, something – either more prudent management of our federal fiscal affairs, or else a painful and destabilizing crash – will intervene.

The long-term forecasts, however, illustrate the crucial point that we are sitting on the mother of all bubbles. Many, probably most, Americans anticipate a stream of consumption that will be provided for them into old age by the government (i.e., other taxpayers). Unfortunately, most American taxpayers do not anticipate the kind of enormous increase in taxes that would be required to pay for this stream of benefits. One or both of these expectations will not be met. Americans as a whole are simply less wealthy, in the most useful sense of rationally anticipatable future material consumption, than they think they are. And the size of this disconnect is vastly greater than, for example, the size of the housing price bubble that just popped.

The only good news is that most of this unfunded spending anticipated under current law is decades away. This matters a lot. If we faced a total collapse of our economy 75 years from now, but could keep pushing the deadline out by about one year per year, it would remain theoretical.

But the bad news is that, in this instance, the combination of the debt we have just put on our balance sheet, plus the deficits that are scheduled to be created by these programs over the next 10 to 20 years, means that we need to confront this problem soon. We have increased public debt from about 40% of GDP in 2005 to about 70% of GDP by the end of this year; and we are projected to exceed the historical peak debt of about 100% of GDP (achieved at the close of WWII) within the next 15 years; thereupon we enter uncharted territory.

The borrowing capacity of the U.S. government is vast, but it is not limitless. At some point that nobody can predict, we will not be able to borrow enough to continue spending in the manner that current law assumes without extremely large negative effects. This is the way the bubble will very likely pop, if we let it come to that: a funding crisis for the U.S. government. This is very unlikely in 2011, or 2012 or 2013; but over 20 years, we are playing with fire. In essence, we don’t need to care that much about the projections for 2080, because long, long before that, we’re going to hit the wall, unless we somehow address the problem.

The rational goal is to push the point of crisis out well beyond the current planning horizon. There is no “long-term solution” that can ever be achieved by any budget deal. In the end, the ballast in the entitlement and budgeting system that prevents it from going haywire in the long run is the good sense of the American electorate. That’s why neither the conservative emphasis on the character of the people, nor the progressive focus on maintaining social consent for the capitalist system, is entirely misplaced.

The ball we need to keep our eye on is not so much the theoretical ultimate cost, as how much time we have left before we crash. We should want a set of entitlement rules that we believe to be sustainable in perpetuity, but we need to push out the date at which we would have a crisis.

As a result, I think that a lot of the discussion of the Ryan plan, and of President Obama’s speech, has missed the point because it has been excessively focused on statements about what “we” will do many years from now. Setting a general direction and showing that it is possible to get tax receipts to equal expenditures is helpful in creating political momentum. It is also useful in that, on the margin, it can get lenders to the U.S. Treasury to be more comfortable with the safety of their investment.

But beyond this, long-term targets are mostly a sound-and-light show, because we can’t really control today what spending and taxation levels will be in 2065 or something, as each future Congress can change whatever it wants. These out-year targets are very much like proposed laws that would “guarantee” that carbon dioxide emissions will be 80% below today’s level in 2050. They will, unless and until that future electorate decides that they don’t want to forgo the economic consumption that this would require any more than we do in 2011.

A real plan to address our debt problems, then, should focus on two key elements: (1) putting in place mechanisms for influencing future legislatures that we cannot command, and (2) enacting structural reforms that will simultaneously encourage general economic growth as they do this.

One thing that helps to address the first requirement is to try to establish enduring public opinion, which is the primary real benefit of the debate and long-term targets that I referenced above, as this will hopefully have some effect on the political landscape confronting Congress for years. Another is to force some real spending cuts now, not just in the future, which affects both the future baseline and mindset. But the most important thing is to change the rules of the game. That is, to tilt the playing field so as to bias the system toward reducing deficits each year, as compared to the current system. Of course, there are no absolutes – anything can be changed by a future Congress, or if necessary, via Constitutional amendment. The goal of the policymaker who wants to deflate the bubble is to make it much harder to do this than simply by passing a budget with X instead of Y dollars for some purpose. Paul Ryan is a skilled budget technician, and seems to me to have prioritized a number of features that will help to serve this purpose. This is the boring, nerdy-sounding stuff like “Create a budget point of order against legislation that would increase net mandatory spending beyond the ten-year window of the budget resolution,” and “Close the loophole that allows discretionary limits to be circumvented through advance appropriations” that forms much of the guts of the plan.

The second requirement is that the plan should look hard for two-fers: structures that both help the deficit problem and help growth in the long term. One example of this is to make the tax code simpler and broader. Reducing or eliminating a large number of deductions (business and personal), convoluted subsidies and so on, not only helps with the short-term debt problem, but should help the economy grow over time by getting rid of distortions. This is a major emphasis of the sensible Bowles-Simpson report, and there are numerous (too timid in my view, but I don’t have to get elected to anything) instances of this in the Ryan plan, such as reform of the corporate tax structure, eliminating some agricultural and energy subsidies, and winding down government sponsorship of the mortgage lending business. Another example is decentralizing decision-making through exactly the kind of block grants of Medicaid and “voucherization” (in the conceptual sense, via premium support) of Medicare that Ryan has proposed. Rates and spending levels can always be changed back in the face of inevitable future “crises,” but structures are much harder to undo. Again, these structural changes can be used to get costs down in the short term, but their major benefit is helping growth decades out. The silver lining of this crisis is that by being honest with ourselves about our current situation, we can hopefully establish a practical platform for future growth and prosperity.

Paul Ryan’s willingness to stand up and provide a comprehensive and understandable projection of balanced total inflows and outflows for the U.S. federal government is what marks him as serious about this problem; but the specific proposals to reduce spending now, to change the rules of the game, and to restructure the welfare state in ways that will encourage long-term growth, are what make the Ryan plan serious.

Ideally, these various strands would be woven together by a national leader into an overall program that combines a long-term vision, sustained public support, changed rules of the game, and institutional structures designed for the 21st century rather than the 20th, in order to partially lock in a program over a long period of time. A great model for doing exactly this is the combination of FDR’s brilliant political messaging, the creation of powerful rent-seeking constituencies, and laws that set benefit changes on autopilot unless Congress proactively changes them (rather than, for example, demanding “zero based” budgeting for Social Security each fiscal year), which together created the entitlement system that we are now trying to reform. All products of human agency have a finite lifespan. Like the FDR vision of the entitlement state that now appears to be in its death throes, whatever we put in place will eventually become antiquated and have to be replaced. Our job is to make sure that this happens 75 years from now, not 10 years from now.

(Cross-posted at The Corner)

The Econ 101 Case For France's Wealth Tax

As able TAS readers will know, I’m a pretty free market guy. Yet unlike most center-right French people, I like France’s wealth tax. It just makes economic sense.

Yesterday, at Business Insider, I wrote a piece explaining why. I thought it might be of interest to this crowd.

I Don’t Think These Count

Economist Karl Smith and I have been going back and forth about the nature of the assertions made by economics. In his most recent post on the subject, Smith says this:

I see in one of Jim Manzi old posts he is asking for anyone to come up with 14 non-obvious empirically verified and useful rules made by economists.

He then takes up the challenge by listing 14 propositions that he believes qualify.

The actual challenge in my post was this:

My challenge would be simple: please list 14 useful, non-obvious predictive rules that economics provides that have survived rigorous, replicated falsification trials. [Bold added]

Smith was presumably short-handing, which is fine, but it’s important to keep in mind the bolded terms. A predictive rule has the form “IF observable event X occurs, THEN observable event Y will follow.” Surviving rigorous, replicated falsification trials means that the predictive rule has been tested by independent investigators attempting to disprove it, by using it to make difficult, measurable, real-world predictions, and seeing if it passes all of them.
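To be concrete about that form (a schematic of my own, not anything Smith proposed), a predictive rule and a falsification trial against it look roughly like this:

```python
# Schematic of a predictive rule "IF observable event X occurs, THEN
# observable event Y will follow," and a falsification trial against
# independently collected observations. Structure only; the data are
# placeholders.

def rule_holds(observation):
    # The rule is silent about cases where X did not occur.
    if observation["x_occurred"]:
        return observation["y_followed"]
    return True

def falsification_trial(observations):
    # The rule survives the trial only if no observation refutes it.
    return all(rule_holds(obs) for obs in observations)

trial = [
    {"x_occurred": True, "y_followed": True},
    {"x_occurred": False, "y_followed": False},
    {"x_occurred": True, "y_followed": False},  # a counter-example
]
print(falsification_trial(trial))  # False: the rule fails this trial
```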

A number of the rules that Smith proposes don’t seem to me to be, or to map unambiguously to, predictive rules of the form IF X, THEN Y. In this post, I’ll focus on those that do seem to be asserted as predictive rules. I think there is a consistent problem with most of them: they are not falsifiable.

Consider Smith’s Rule 5: “An increase in the mass of citizenry will not lead to an increase in the proportional mass of the unemployed.”

What about the fact that between 2000 and 2007 the number of U.S. citizens rose, and the proportion of unemployed subsequently increased?

Of course, you will say: “That’s idiotic. The increase in the mass of the citizenry didn’t lead to the increase in the proportional mass of the unemployed. It was a coincidence.” But this counter-argument begs the crucial question: how do you know that?

I’ve picked an extreme example to make a methodological point. We all know that there was a huge economic crisis at the same time that probably had an enormous role to play in rising unemployment starting in 2008. OK, so should the corrected rule say that “An increase in the mass of citizenry will not be followed by an increase in the proportional mass of the unemployed, unless there is also an economic crisis of the following dimensions”? I don’t think that works, because I can come up so easily with another counter-example that did not occur during a financial crisis, which would then require some other exception. And so on. These aren’t isolated incidents; there are thousands of examples, of varying duration and geographical extent, of population increases followed by unemployment increases. Either you are able to generalize these exceptions into some (potentially probabilistic) rule that can be stated in reasonably compact form, or you’re writing a history book.

If you have built such a rule, then it can be tested; but you don’t get to rescue the rule after it fails a test by stating some new exception, and claiming that the rule is still “basically right.”

Rules like this, that embed terms like “leads to” or “causes,” typically will not really be falsifiable, because they beg the question of causality that falsification testing is meant to establish provisionally.

Consider, in the same light, Smith’s Rule 6: “The total flow of services available to the community cannot, in general, be increased by destroying some stock of assets. I.E. one cannot raise general living standards by breaking glass to give work to the glass maker.”

What about Western Europe between 1939 and 1970? Following the massive destruction of its stock of assets in WWII, Europe rebuilt to provide a much greater flow of services to the community. The same problem exists for the counter-argument that “but for the destruction of WWII, Europe would have been even richer in 1970.” How do you know that? The rule embeds the assumption of the answer in the form of “be increased by,” so counter-examples can’t disprove the rule.

Or consider Smith’s Rule 9: “Increasing the supply of medicine and vaccines to a preindustrial society will cause living standards to fall.”

What about Europe from before to after the industrial revolution? The supply of medicine and vaccines was increased, and living standards subsequently rose enormously. Same problem again: the rule simply assumes “cause,” and this precludes falsification.

(Cross-posted at The Corner)

Jim and Noah’s Excellent Adventure, Part 1

As always, Noah Millman has raised some excellent questions about what I’ve written. This post addresses the first part of his response, and I’ll try to address the second part in the next post.

Noah’s first question is whether, by the same logic applied to economics, I consider geology to be a science.

Let me start with the purpose of science as I see it. To quote Francis Bacon, from Novum Organum, the fountainhead of the scientific method: “the true and lawful goal of the sciences is none other than this: that human life be endowed with new discoveries and powers.” More specifically, the purpose of science is to create useful, reliable and non-obvious rules that allow us to predict the effects of potential interventions we might make on the physical world. Everything else – data collection, equations, laboratories, scholarly journals, tenure committees, all of it – is a means to this end.

The method that science uses to do this resists formal algorithmic description, and varies somewhat by specialty and over time, but broadly includes hypothetico-deductive reasoning for building theories, and controlled experiments for testing them. The first third of my upcoming book is an attempt to describe this method, so I won’t try to do it here, beyond this very high-level description.

The fifth chapter of the book is titled “Science without Experiments.” In it, I distinguish between two kinds of fields that are generally considered science, but that do not rely much on experiments: (1) so-called historical sciences, such as geology or parts of evolutionary biology, for which most experiments are impossible in principle because these fields address only past events; and (2) fields which make forward predictions, but for which most experiments are infeasible, the principal example being the astronomical sciences. Without trying to recapitulate the whole argument here, my view is that to the extent that historical science contributes to the development of predictive rules that are subsequently tested and corroborated, it is scientific; to the extent that it does not, it is not. (Of course, we have to recognize that the route this contribution takes can be very long and circuitous.)

Noah goes on to say:

Economists can predict all kinds of boring things with great accuracy. Even things we’re interested in, they do well-enough at that people with money on the line – investors, business owners; not just politicians – rely on economic forecasts all the time. They aren’t perfect, but they are better than any available alternative. No, they can’t predict the things we’re really interested in – such as when the next recession will hit. But the geologists can’t predict the next earthquake either.

Here’s a very short excerpt from the book on this question (using the analogy to weather forecasting, rather than earthquake forecasting):

[N]ot much that is practically important about the overall development of the economy is predictable in the long-term. Short-term forecasts of complex systems can sometimes be made on a combination of “momentum” and the assumption of a relatively simple set of causal mechanisms. Five-day weather forecasts can be made, for example, by more complex versions of the observation that the weather in Ohio today is highly correlated with the weather in Virginia a couple of days later because of consistent wind patterns.

“GDP will likely grow by 1.3% next quarter” kinds of forecasts that are better than naive forecasts (what meteorologists term forecasts with “skill”) are useful to have. Check. They give us material information about the near-term background against which our interventions will be executed. Further, it is often the “scientific-seeming” aspects of meteorology (lots more monitoring stations, satellites and super-computers, that collectively create a much more granular finite element model) and economics (analogously, larger data sets and more megaflops) that allow increasing skill. Check.

But because they lack a robust understanding of the extraordinarily dense causal network that governs the system in question (using “causal” here in the operational sense of the ability to make accurate predictions about the difference between the world in which we execute an intervention as compared to the counterfactual world where we do not), they do not allow us to make reliable, non-obvious predictions about the result of proposed actions. This is closely related to my fundamental criticisms of economics and other social sciences – not that they can’t do this at all, but that they have extremely limited ability to do so, and many social scientists (or often, social science popularizers) make wildly excessive claims about this capability. We ought, in my view, to premise our political economy on an accurate understanding of our ignorance.

At a certain level, it seems to me to be an enormously obvious case of the emperor’s new clothes that when you go from a discussion at an abstract level like this to a very practical level – “OK, please state the rules developed by economics that can actually make useful, reliable and non-obvious predictions about the result of alternative proposed courses of government action on the issues of the day” – you get a pretty underwhelming result: non-falsifiable statements, which 10%–20% of the practitioners in the field dispute. There are obviously debates in frontier areas of all science, but at the level of things that can be said in a freshman textbook, there is also a body of agreed-upon causal laws that, crucially, have been converted to a body of engineering praxis that can be used to solve real problems. In all of the back-and-forth with economics bloggers on this question, I haven’t yet been confronted with anything approaching F=MA plus a long, long series of experiments showing that this is an excellent engineering approximation at terrestrial scale and speed that is the basis for the following machines that will produce X effect based on Y intervention, or a list of approved drugs, derived from the following work in biology labs, that have shown in clinical trials that when you introduce the following chemical into the bloodstream of a human with the following measurable indicators of a disease state, it will change those indicators with the following distribution of change. When it comes to economics, where’s the beef?

Is Economics a Science?

According to Andrew Sullivan, I stand alone against an array of bloggers in arguing economics is not a science. At the level of semantics, I couldn’t care less what label is applied to economics. I think that the operational issue in front of us is what degree of rational deference we should give to propositions put forward by the economics profession. I believe that this question is not academic navel-gazing, but crucially important for our political economy. It goes to the heart of the case for broad-based freedom.

Ryan Avent makes the following argument in support of economics as a predictive science (and Matt Yglesias associates himself with it):

Economics is quite often effectively predictive. If the supply of one good is disrupted, economists can tell you with great certainty what will happen to demand for complementary goods and substitutes. If supply levels are known and research establishing elasticities has been done, they can tell you even more about what will happen. Their predictions will nearly always be right. And this is true for many aspects of economics. [Bold added]

I’ve done a lot of this. Suppose, to take a down-to-earth example of what Avent is describing, you wanted to predict the effect of “disrupting the supply” of Snickers bars on the sale of other candy in your chain of convenience stores. The “elasticities” here are how much more candy of what other types you would sell if you stopped selling Snickers. Once you know that, plus the costs of various kinds of candy, you can easily do the math to figure out how much more or less money you will make if you get rid of Snickers. The elasticities are the can opener – once you have them, the rest is just arithmetic.

The best, and most obvious, way to establish the elasticities, is to take a random sample of your stores, stop selling Snickers in them and measure what happens to other candy sales. This is the so-called scientific gold standard method for measuring them. Even this, of course, does not produce absolute philosophical certainty, but a series of replications of such experiments establishes what we mean by scientific validation. As I’ve gone into in detail elsewhere, even having correctly measured the elasticities for these stores on those dates, generalizing this into a reliable forward prediction rule remains much trickier for the convenience store chain than in physics or even biology, because of the extreme causal density of human social behavior.

But suppose you can’t run an experiment. As implied by Avent and Yglesias, you could go back into historical data for sales by product by store by minute for the past 36 months, and look at what happens to sales of other candy at stores that have stock-outs of Snickers. Unless you are very lucky, and happen to find a true natural experiment in which a near-random group of stores were cut off for Snickers but not other goods and not in conjunction with some other macro event – that is, unless the experiment as you would have done it occurred already – then you are left with some version of building a regression model to estimate the elasticity between Snickers availability and (to keep it simple) sales of all other candy as a group. You will include lots of “control” (there’s that word again) variables in your model, because there are lots of things – say, very bad weather or Halloween or weekend vs. weekdays – that might independently affect both Snickers availability and sales of all other candy. After you build an initial version of this model, it will occur to you that some specific other control variable might be important, so you get the data and include it. Even after you do this, you’ll still have some stores for which the apparent effect goes the wrong way, that is, at the same time you see a Snickers stock-out, you see sales of other candy go down. Let’s further say that you’re state-of-the-art about this, and build a Bayesian shrinkage model to try to adjust for this problem. These are examples; you will find yourself making a long series of such adjustments, inclusion and exclusion of potential control variables, model-tuning and so on.

Once you’re done with all of this, you have an estimate for elasticity – how do you know it’s right? You had to make all kinds of judgments and assumptions. Suppose you didn’t think of including a specific variable, or of making some adjustment? How do we know we’re not way off, rather than just slightly off?

The best way is to run a controlled experiment: take Snickers out of a random sample of stores, and see if the prediction made by your model is correct. If you don’t do that (or maybe at a higher level of abstraction, if you have not run many experiments that validate your modeling method within a tight class of application like elasticity estimates for stock availability in convenience stores that lets you avoid re-testing in this case), then you don’t know whether you’re right or not. All you have is a very sophisticated theory. The cash nexus is the experiment that tests the theory.
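Here, for the curious, is a minimal simulation of that experimental approach. Everything in it is invented (the store count, the baseline sales, the true substitution effect); the point is that the randomized design recovers the quantity of interest with a simple difference in means, without the long series of modeling judgment calls described above:

```python
import random
from statistics import mean

# Minimal simulation of the randomized experiment. Every number is
# invented; the point is the design, not the values.
random.seed(0)

TRUE_LIFT = 0.12  # other candy sales rise 12% where Snickers is removed

def weekly_other_candy_sales(snickers_removed):
    base = random.gauss(1000.0, 80.0)  # store-to-store noise, in $/week
    return base * (1 + TRUE_LIFT) if snickers_removed else base

stores = list(range(200))
random.shuffle(stores)
test, control = stores[:100], stores[100:]

test_sales = [weekly_other_candy_sales(True) for _ in test]
control_sales = [weekly_other_candy_sales(False) for _ in control]

# Difference in means is the whole analysis; randomization does the
# work that the control variables were trying to do in the model.
estimated_lift = mean(test_sales) / mean(control_sales) - 1
print(f"estimated substitution lift: {estimated_lift:.1%}")  # near 12%
```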

This is the fundamental difficulty with economists’ pronouncements about the predicted effects of various government programs – there is no reliable evidence at the foundation of the inferential edifice. We usually haven’t run a sufficient number of (or sometimes, any) real randomized experiments to build reliable predictive rules for the effects of the proposed government programs that are the issues of the day.

This is why I think Avent is missing the point when he then says:

It’s important to note that because economists can’t always run their own experiments, there will tend to be more confidence about theories that focus on things which occur very often. Prices shift constantly, and economists consequently know a LOT about prices. Massive, global economic recessions occur about once a century. There is obviously a lot more uncertainty regarding the theories that describe these events.

As is Yglesias, when he says something very similar:

The fact that the economics profession can offer so little in the way of consensus guidance about dramatic, crucially important events like the panic of 2007-2008 is a huge problem and a very legitimate knock on the enterprise, but it doesn’t actually undermine the overall epistemic status of the discipline. The hope is that over time things improve. And, indeed, for all the horrors of the current recession it’s been managed much better than the Great Depression of the 1930s was. Progress is happening. The only way to make more rapid progress on the science of macroeconomic stabilization would be to have many more recessions so as to gather better data.

Having more data points doesn’t cure the problem. It only provides the opportunity to do so, not (or more precisely, not only) because we can then build more sophisticated regression models, but because this makes experimentation practical. Lots of sequential price transactions, for example, are repeated events that comprise an intuitive reference class within which we can run experiments to test theories and generalize them into practically-useful decision rules.

Both Avent and, by extension, Yglesias associate themselves with Adam Ozimek’s additional, very different comments about the implications of being consistent about this kind of epistemic humility. I broadly agree with what Ozimek has to say on this score.

First, he says that consistency implies skepticism about global climate models. I agree, and have written about precisely this point. (Separately, I think that there is a firm scientific foundation for concern about AGW, not because of climate models, but because we know through replicated experiments and other means that CO2 redirects IR radiation. Ironically, once you accept that premise, the more uncertain you are about the precision of climate models, the more you ought to worry about AGW.)

Second, he says that this should imply skepticism about John Lott’s claims about gun ownership and crime. I agree. In fact, based on precisely this thinking, I wrote a very critical review of one of Lott’s books in which he used the same method that he employed in the guns-and-crime analysis to argue that abortion legalization increased crime. (I was equally critical in the same review of the opposite conclusion, using similar methods, in Freakonomics.)

I think that Ozimek’s demand for consistency is fair, and as I said at the start of this post, important. My upcoming book is focused on (i) making the case that this kind of epistemic humility is justified on the evidence, and (ii) trying to work out some of the practical implications of this observation.

(Cross-posted at The Corner)

Another Round of Millman and Manzi

Prompted by another post about the limits of what economists actually know.

Jim’s post is really two posts, making two very different arguments, so I want to tackle them separately – among other things, because I don’t really agree with the first, but find the second really interesting.

Jim’s first post returns to his earlier argument about how scientific economics actually is. Last time around I responded by comparing economics with medicine, and asked: if economics isn’t a science, is medicine a science? Now I’ll try another tack: is geology a science?

Geologists can’t run controlled experiments, after all. We’ve only got one planet, and there are real limits to how much we can mess around with it. Moreover, much of the planet can’t be observed directly – and indirect observation is heavily theory-dependent; we infer what is going on below the surface based on what we can observe from the surface, plus our current theory about how the interior of the planet works. If the theory is materially wrong, the same observations could mean that different things are happening.

And, as a result, geology gets all sorts of predictions wrong. For one example, geologists cannot agree on how much petroleum there is in the ground – and these disagreements have really material policy implications. For another, Japan has just suffered a catastrophic earthquake and subsequent tsunami – and that earthquake was not predicted. (Obviously, any geologist could tell you that Japan is earthquake-prone – but that’s very different from predicting the timing and severity of an actual earthquake.)

So: is geology a science?

I hope we can all agree that the answer is “yes.” But if that’s the case, why does economics fail the test of being a science, given that it has some of the same deficiencies?

Part of the problem, I think, is that with any science we take settled matters for granted and focus on the unsettled areas, but with the human sciences we somehow forget that any settled questions were, in fact, settled scientifically. Economists can predict all kinds of boring things with great accuracy. Even things we’re interested in, they do well-enough at that people with money on the line – investors, business owners; not just politicians – rely on economic forecasts all the time. They aren’t perfect, but they are better than any available alternative. No, they can’t predict the things we’re really interested in – such as when the next recession will hit. But the geologists can’t predict the next earthquake either.

Another part of the problem, I think, is that economic policy is value-laden, so there’s an emotional charge associated with it that doesn’t obtain with, say, decisions that are based on geology. So geologists may disagree violently with each other about, say, the likelihood that we have already reached “peak oil” production, and we recognize this for what it is: a debate between scientists on a question where there is no consensus yet. But when two economists disagree on whether a stimulus plan will work or not, we throw up our hands and say – these are the people who are supposed to tell us how to run our economy, and they can’t agree on something this basic? But it isn’t basic at all, and there isn’t anything “weird” about the fact of that disagreement.

Where I always end up with these debates with Jim is: how does his skepticism actually impact our decisionmaking? When Jim says economists are “more like historians than biologists” – what does that mean when the question on the table is “what should the Fed Funds rate be right now?” That’s a policy question that the Fed has to answer every time it meets. If we don’t make it primarily by weighing the views of trained economists, leavened with the views of market professionals with a kind of practical, under-the-fingernails appreciation for how markets will react to this or that decision – then how do we make it? And if he agrees that this is how we should make such decisions, then what is the practical impact of his skepticism, given that this is basically how these decisions are made now?

The second part of Jim’s post is more interesting to me. Jim makes a variety of points about the limitations of “economic man” as a model for human behavior even with respect to economics, and cites these as marks against economics as a science as well. Humans have irrational attachments. They are not always motivated by material self-interest. They do not always understand the utility of the institutions and habits that they themselves depend on. And so forth.

This argument is made frequently in various forms, and I’m curious whether actual economists generally dispute any of these limitations. Perhaps they do. But one hot field in economics these days is “behavioral” economics, which explicitly attempts to see how limitations on rational cognition affect human economic decisions. This is, in fact, something you can run experiments on, and economists are running those experiments – and then disputing what the results mean.

Ultimately, this point is a complication to economics rather than an argument against it. I’ll take an example. New York City has a medallion system for taxis. The supply of taxis is artificially restricted by law. The argument in favor of such a system relates, basically, to quality control. Most economists would argue that this is self-serving – that, really, this is just a law to protect incumbents, and not a law that promotes “quality” at all, and that anyway maybe consumers would prefer lower prices to the higher “quality” taxi ride produced by the law. Jim’s caution, I think, would be to say: don’t assume you know how much of Manhattan’s commercial culture depends on the reliable quality of yellow cabs. Before you tinker with the medallion law on the basis of an economic theory that opening up competition will improve the consumer experience, try to determine the impact of such a change on the unique culture of Manhattan.

But either that’s something you can study, and try to measure, or it isn’t. If it is, then economists would seem to be especially well-placed to measure it. If it isn’t, though, then how do you know whether the argument in favor of the medallion system is, in fact, a good argument rooted in the importance of taxis for a specific culture, or whether it’s just special pleading by incumbents? Again, we return to this question of decision-procedure: once we’ve been properly cautioned that change may be dangerous, how do we actually make policy decisions?

Jim quotes Douglass North at the end of his piece, saying that neo-classical economics can’t tell us how to promote “development” because it’s all about allocative efficiency rather than adaptive efficiency. I will note, first of all, that “development” and “growth” are not the same thing, and that there are factors other than development that bear on human welfare – including material welfare. If neo-classical economics does know something about how to achieve full employment, but very little about how to achieve long-term productivity growth, that doesn’t mean neo-classical economics is useless. Unemployment is very bad for the unemployed.

The insight becomes interesting if there is a material trade-off between allocative efficiency and adaptive efficiency – if, say, it could be demonstrated that using monetary and fiscal policy to try to achieve full employment actually resulted in lower long-term productivity growth. But, again, who is going to demonstrate this, if not economists? In the absence of a good scientific theory of what drives innovation – which, in turn, is the driver of productivity growth – we have no way of distinguishing a strong argument from special pleading.

It seems to me that Jim shouldn’t be arguing from a position of skepticism – that economics is insufficiently scientific. Rather, he should be arguing for the development of a more scientific understanding of precisely those factors that he thinks (but can’t prove) are relevant to development and that he thinks economics scants. If they are scientists, economists should be enthusiastic about any project of that sort – provided it is undertaken scientifically. And I suspect they would be.

UPDATE: See also here. It’s kind of funny that I’m defending the economics profession in debate with Jim Manzi, and here’s someone defending the economics profession in a debate with . . . climate scientists.

Common Misconceptions of Economists

Tyler Cowen did a matched pair of posts on what he believes to be the common mistakes of left-wing and right-wing economists. What seems so striking to me is not the difference between mistakes made by different kinds of economists, but rather what I believe to be the set of misconceptions that are endemic to the profession.

To begin, let me provide two caveats. First, everything that follows is a generalization about what I take to be the dominant tendencies of professional economists (and especially American economists working in academia and government). It is possible to cite counterexamples to each statement; in fact, every criticism I will make has been anticipated by canonical economists including Hayek, Coase, Knight, Schumpeter, North and Smith (Adam and Vernon), among others. Second, I am observing the economics profession from the outside as an entrepreneur and business executive engaged in the economy directly. I think that the last formal economics training I had was Evsey Domar’s Comparative Economic Systems seminar at MIT in the 1980s.

I’ve grouped these observations into two broad themes to provide some structure; but in order to get beyond very general abstractions, I’ve also tried to give some of what I think are the most important examples of each theme. At the conclusion of each theme, I’ve highlighted what I see as the negative result of these specific problems, from the point of view of a consumer of the outputs of professional economics.

1. Strategic elision between economics as predictive science, and economics as informed advocacy. Economists will sometimes make explicit claims that “the economic science says X,” and will more frequently make implicit claims for scientific knowledge by flatly asserting the known truth of some predictive assertion. This is normally a statement made around some specific policy question – we should (or should not) execute the following stimulus program; we should (or should not) raise the minimum wage right now, etc.

When pushed to provide the scientific evidence, they will normally reference some combination of empirical analysis of naturally occurring phenomena and mathematical models derived from axiomatic statements about human decision-making. In scientific terms, this is all sophisticated theory-building. What’s lacking is dispositive evidence of the accuracy of the predictive rule that allows the statement about this specific case to be an example of a more general rule that has scientific provenance. Otherwise, all we have is an informed opinion of the type we might get from an expert historian rendering a judgment about, say, the likelihood that Libya would revert to an authoritarian government within ten years of overthrowing Gaddafi.

Among the most important manifestations of this problem are:

a. Lack of focus on controlled experiments as falsification trials. Theory and experiment are to science as inhalation and exhalation are to breathing. Even in scientific fields in which experiments are infeasible, our knowledge of causal relationships is underwritten by traditional controlled experiments. Astrophysics, for example, relies in part on physical laws verified through terrestrial and near-Earth experiments. Economics has traditionally been a consciously non-experimental science (though this is slowly starting to change). This creates a very weak feedback loop to weed out false belief. One can argue that controlled experiments cannot be done for many important economics questions. Fair enough, but then the claim to scientific status for these beliefs is hard to sustain, and leads to the next problem…

b. Ad hoc retreat to non-falsifiable “all else equal” arguments when confronted with apparently disconfirming evidence. The attempt to use predictions about future non-experimental events as falsification tests for beliefs tends to founder on what I have called the “causal density” of society. There is always some plausible excuse for why the prediction was wrong, but the theory is still right. The lack of control in most so-called “natural experiments” is deadly, but is often obscured by the next problem…

c. Hiding behind pseudo-technical jargon. All scientific fields have jargon, but not all fields with jargon create scientific knowledge. Normally, jargon must exist so that statements can be made precise enough to be falsifiable. As one layer of insight builds upon another in a given scientific specialty, jargon takes on the role of explaining an ever-widening scope of phenomena with reference to prior insights. Jargon is a bug, not a feature: it is a necessary evil for specialists to make progress within a paradigm, but has the disadvantage of preventing non-experts from contributing meaningfully to the discussion. Without the discipline of experimental verification, however, this becomes more like philosophy than science. For economists, jargon can be a feature, not a bug, if it can be used to intimidate non-experts (generally those who are less comfortable with mathematics).

Result: The lack of a body of useful, reliable and non-obvious rules to predict the impact of proposed government interventions. To somebody who sits outside the profession, debates among economists are a means to an end. All I want is output: tell me the value-creating rules – rules that your collective enterprise has produced, and that I would not have in the absence of your work – for predicting the results of potential courses of action on the major issues of the day.

Greg Mankiw, an economics professor at Harvard, is the author of one of the most widely-used economics textbooks in the world. In a chapter specifically devoted to arguing for the scientific nature of economics, he presented “a table of propositions to which most economists subscribe.” I found this pretty underwhelming as an argument for economics as predictive science: about 10% – 20% of economists apparently disagree with the central results of the field; half of the propositions are value statements concerning the way the world should be run (literally using words like “should”), rather than the kind of predictive rules produced by science; and even the other half, which are theoretically-falsifiable predictive rules, are mostly neither practically testable, nor specific enough to guide rational action. That doesn’t sound like any scientific field that I know about.

I’m not arguing that economics has produced nothing of value, but rather that its most useful outputs are more like those of historians than those of biologists. Draping the cloak of “science” over its findings can often be a rhetorical strategy designed to increase the leverage of economists in policy debates.

2. Use of a model of the human mind, and by extension human society, that is simplified to the point of caricature. All rational disciplines, of course, must use abstractions that ignore some of the complexity of the real world. The question, for a consumer of the discipline’s work, is whether the abstraction supports or precludes the development of practically useful guidance. The point of the first half of this post is that economics mostly hasn’t produced such guidance. It is my view that an over-simplified view of the human mind and society is a key reason why not.

a. Ignoring the “irrational” psychological importance of group affiliation, and therefore under-emphasizing the role of institutions in promoting self-image rather than merely material self-interest. At the timescale of biological evolution, an extended commercial republic is a brand-new invention. While we have more just-so stories than legitimate scientific knowledge about the role of evolution in shaping human nature, one should expect that the faster pace of social change relative to biological evolution is likely to create profound conflicts.

Any sustainably great collective – IBM, the Berlin Philharmonic Orchestra, the Pittsburgh Steelers, the U.S. Marine Corps, the University of Cambridge, or the United States of America – appeals to the rational self-interest of its members, but also creates a sense of irrational identification with the enterprise. Individuals within each will, to some extent and in some circumstances, sacrifice narrowly-construed perceived self-interest for the good of the whole. This kind of motivation is far more central to the lives of most real people than it is to most economic theories.

b. Many other examples include the mercurial nature of “utility,” the difficulty of forming commitment bonds across kin lines, and the central role of culture in creating economic outcomes. As is sometimes said in business, “the soft stuff is the hard stuff.” Though not amenable to analysis, and especially not to quantification, the weird crevices of the human mind manifest themselves powerfully in our daily lives. In combination with the prior example, this tends to scale up to the dizzying complexity of the institutions (in the broad sense of the formal and informal “rules of the game”) that determine the economic success and failure of societies. This is not ignored by economics, just radically under-emphasized.

c. Treating uncertainty as if it were risk. Under-emphasizing complexities of the kind highlighted in the first two examples tends to lead to excessive belief that previously observed patterns are reliable predictors of future behavior. Repeated coin flips are complicated in the sense that we can’t normally predict heads vs. tails on a specific flip; but the series is still subject to probabilistic regularities, such as that fair coin tosses should come up heads almost exactly half of the time. Human society is yet more complex, and patterns that seem reliable can suddenly change. This is frustrating to analysts, and therefore often ignored, or given only lip service, en route to making recommendations that rely on the assumption that these patterns will persist.
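
A toy simulation can make the risk/uncertainty distinction concrete. This is just an illustrative sketch – the regimes, parameters and timing are all invented – contrasting a process with stable probabilistic regularities against one whose underlying regime silently breaks:

```python
import random

random.seed(0)

# Risk: individual coin flips are unpredictable, but the long-run
# frequency of heads settles reliably near 0.5.
flips = [random.random() < 0.5 for _ in range(100_000)]
print("share of heads:", sum(flips) / len(flips))

# Uncertainty (toy version): a series whose data-generating regime
# changes partway through. A forecaster who extrapolates the first
# fifty observations has no way to anticipate the break.
series = [random.gauss(0.03, 0.01) for _ in range(50)]    # "stable" regime
series += [random.gauss(-0.02, 0.04) for _ in range(10)]  # sudden regime change

print("pre-break average: ", round(sum(series[:50]) / 50, 4))
print("post-break average:", round(sum(series[50:]) / 10, 4))
```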

d. Ignoring the resulting complexities of the evolution of institutions over historical time. Our institutions are often mechanisms for organizing human behavior in light of human complexities, and for making decisions in light of true uncertainty. In an environment of true uncertainty, they have often evolved through trial-and-error, and they therefore resist analysis. This is also frustrating to analysts.

Result: Excessive focus on allocative efficiency, at the expense of adaptive efficiency. I really can’t say this any better than Douglass North in his 1993 Nobel lecture: “Neo-classical theory is simply an inappropriate tool to analyze and prescribe policies that will induce development. …. It is adaptive rather than allocative efficiency which is the key to long run growth. Successful political/economic systems have evolved flexible institutional structures that can survive the shocks and changes that are a part of successful evolution.”

In sum, academic and government economists routinely overstate their actual degree of reliable, non-obvious knowledge about the answer to the practical question “What will happen if we execute policy X,” because it serves the class interest of economists to do so. Economists follow incentives like everybody else, rather than somehow sitting outside and above the process of buying and selling. Buyer beware.

The Financial Crisis Was a Rich-Country Affair

I haven’t read the book (yet), so I probably shouldn’t comment, but I’m puzzled by Matt Yglesias’ review of The Globalization Paradox, and here’s why.

Matt says:

The third book, which I wish he’d just done as the whole book, is about “financial globalization” and how it went very bad. These aren’t the terms he uses, but I think the point he’s making about this is that total lack of capital controls makes it very difficult to do effective macroeconomic stabilization so you end up with large output gaps so even if you assume that there’s some microeconomic inefficiencies associated with capital controls you still end up with more robust, more stable growth if you have controls . . . This seems very persuasive to me.

It’s very persuasive to me – but, if we’re talking about globalization, developing countries and the danger of attracting “hot money” then this is the lesson of Malaysia’s performance during the Southeast Asian currency crises of the late 1990s, versus the performance of Indonesia and Thailand. Right? Because the financial crisis of the last decade was overwhelmingly a rich-country affair. The 2000s were a great decade for the developing world – for China and India, of course, but also for Brazil, Southeast Asia, even for certain parts of Africa.

If you want to make an analogous argument about the perils of free movement of capital with respect to the recent crisis, you’d need to focus on small countries that got caught up in the housing bubble mania. In other words, this becomes a story about countries like Ireland and Greece and, hence, really a story about the Euro and what the costs and benefits are to a small country handing over control of its currency to a larger entity. (Positive: much lower borrowing costs; negative: no ability to monetize debt or otherwise inflate one’s way out of a recession.) That’s an interesting story, and one Matt has talked a lot about, but it’s not really a story about development economics because the Euro was, is and always will be first and foremost a political project, not an economic one, and because while Ireland and Greece were relatively poorer than the average European country twenty years ago, they weren’t developing countries, not in the sense that we usually use that term.

And if you want to talk about capital controls and development economics, the whole story is about China, and what benefits China gets (if any) from operating such an enormous and rapidly growing economy behind the wall of an essentially non-convertible currency. Obviously, the regime gets significant benefits in terms of greater control, but a sophisticated estimate of the economic costs to China of the existing currency regime would be useful to see. Because if China isn’t going “too far” in these controls, then “too far” is not a meaningful concept to talk about.

Would Universal Healthcare Coverage Actually Improve Health?

There is a debate going on in the blogosphere between Ezra Klein, Arnold Kling, Karl Smith, Tim Carney and others about, to put it crudely, whether health care really affects health that much. This is, in part, a proxy debate for whether it is worth it for the U.S. government to provide generous universal health care financing for all of its citizens (or, I suppose, residents).

Either position can be caricatured. On one hand, no sane person would want to be without the advances of modern medicine. Recently, a little girl I know had scarlet fever. A century ago, this would very possibly have meant burying a small corpse; today, it implies a 10-day cycle of swallowing medicine at breakfast and dinner. There are few people on Earth who have as much reason to be proud of how they spend their work week as pharmaceutical researchers.

On the other hand, the link from alternative methods of health care finance, through the actual differences in provision of medical care these imply in the contemporary U.S., to the actual differences in health outcomes these treatment differences would cause, isn’t nearly so obvious. The net health effect of providing universal health care coverage versus some alternative financing system is an empirical question, not a philosophy debate.

I’ve written a lot about why randomized experiments are so critical to understanding cause-and-effect relationships in social policy. In the case of health care financing, the reason is that what system of health care financing you have (high-quality “go to any doctor” plan; good HMO; catastrophic-only plan; VA; go to an emergency room because you are uninsured, etc.) is bound up with a myriad of other factors that influence health. A randomized experiment allows us to isolate the impact of the system of health care financing.
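
To illustrate why, here is a sketch with wholly invented numbers – not a claim about any real study. Suppose healthier people disproportionately select generous coverage, and coverage itself does nothing: a naive observational comparison finds a “benefit,” while random assignment does not.

```python
import random

random.seed(1)
N = 100_000
TRUE_EFFECT = 0.0  # assume, for illustration, coverage has no health effect

def outcome(baseline_health, generous_plan):
    # Health depends on baseline plus noise; the plan contributes
    # TRUE_EFFECT (here: nothing at all).
    return baseline_health + TRUE_EFFECT * generous_plan + random.gauss(0, 1)

# Observational world: healthier people are likelier to hold generous plans.
obs = {True: [], False: []}
for _ in range(N):
    b = random.gauss(0, 1)
    chooses_generous = random.random() < (0.8 if b > 0 else 0.5)
    obs[chooses_generous].append(outcome(b, chooses_generous))

# Randomized world: a coin flip assigns the plan, severing the link
# between baseline health and plan membership.
rct = {True: [], False: []}
for _ in range(N):
    b = random.gauss(0, 1)
    assigned = random.random() < 0.5
    rct[assigned].append(outcome(b, assigned))

mean = lambda xs: sum(xs) / len(xs)
print("observational 'effect':", round(mean(obs[True]) - mean(obs[False]), 3))
print("randomized estimate:   ", round(mean(rct[True]) - mean(rct[False]), 3))
```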

To my knowledge, the only large-scale randomized experiment in the U.S. that has tested the actual effects on health of providing various kinds of healthcare financing was the RAND Health Insurance Experiment (HIE). In this experiment, thousands of families were randomly assigned to one of five different health insurance plans that ranged from something like a plan that provides free health care, to something like a pure catastrophic-only plan in which consumers pay out-of-pocket for day-to-day healthcare. The study tracked what exact health care services each group used, and how their health varied over a period of 3 – 5 years.

Ezra Klein describes this experiment as “the best evidence we have,” and writes that it “suggests that health-care coverage does much more for the health of poorer people than it does for the health of well-compensated, highly educated people.” His statement is correct, but as a summary of the results of this experiment, seems to me to be radically incomplete. In fact, the experimenters wrote of the findings that “cost sharing reduced the use of nearly all health services,” but “the reduction in services induced by cost sharing had no adverse effect on participants’ health.” Think about that. Providing people coverage of their medical costs caused no average improvement in health.

Klein is correct that there appeared to be a net health benefit for the poorest participants, but this was for a tiny proportion of the population, and for a small subset of medical conditions. According to the study, “The poorest and sickest 6 percent of the sample at the start of the experiment had better outcomes under the free plan for 4 of the 30 conditions measured.” There are technical reasons why conclusions from such an experiment are not reliable for post hoc subgroups in the way that they are for the average comparison of a test group versus a control group; but even if we were to accept this finding as valid, it’s not obvious to me that we would want to devise a health care financing system for the United States around helping 6% of the population partially ameliorate about 10% of their potential health problems, as opposed to developing some specific supplementary programs for these issues, if they could be addressed feasibly.
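
The post hoc subgroup worry is partly a multiple-comparisons problem, which a sketch can demonstrate. With 30 outcomes measured in a small subgroup and no true effect anywhere, a handful of conditions will “favor” one arm by chance alone (all numbers here are invented; this is not a reanalysis of the RAND data):

```python
import math
import random
import statistics

random.seed(2)
N_CONDITIONS = 30
SUBGROUP = 200  # a hypothetical small subgroup per arm

def z_stat(a, b):
    # Two-sample z statistic under a normal approximation.
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

chance_wins = 0
for _ in range(N_CONDITIONS):
    # Both arms drawn from identical distributions: the true effect is zero.
    free_plan = [random.gauss(0, 1) for _ in range(SUBGROUP)]
    cost_share = [random.gauss(0, 1) for _ in range(SUBGROUP)]
    if z_stat(free_plan, cost_share) > 1.645:  # one-sided 5% threshold
        chance_wins += 1

print(f"{chance_wins} of {N_CONDITIONS} conditions 'favor' the free plan by chance")
```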

Klein clearly has a very sophisticated take on the issue, and wrote in 2009 that health care reform is not primarily about improving health, but about reducing how much we spend on it. As he put it, “The purpose of health reform, in other words, is to pay for health care — not to improve the health of the population.” Fair enough. But the real debate, then, would be about whether market forces or bureaucratic control would be better at reducing costs – not about which would be better at promoting health, whether for the “poorest and sickest” or anybody else.

A single experiment like the RAND HIE is not definitive. Among other things: it finished in 1982, and we live in a different world; any such experiment requires replication; it might be that the important health effects take much longer than 5 years to materialize, and so on. But as an observer of the health care debates, it always struck me as fascinating that the fact that the “best evidence we have” showed that providing health care coverage doesn’t actually improve average health wasn’t treated as more central.

Fascinating, but not surprising. In one social policy topic after another, experts argue that some program will transform some area of public life, and solve persistent problems. They often have impressive theoretical arguments supported by complex empirical evidence. But what I believe randomized experiments have shown in many such areas – ranging from welfare to criminology to education – is that proposed policy interventions rarely work, and when they do, they tend to produce improvements that are very small as compared either to size of the problem or to the dreams of the advocates. This evidence is often ignored by those who have dedicated their lives to solving these problems, likely because it is so frustrating to almost everyone involved.

(Cross-posted to The Corner)

Questions For Scott Sumner

Apropos of this post.

If I understand his preferred policy correctly, the Fed should target nominal GDP growth, trying to keep that in the 5-6% range. If nominal growth is too strong, whether because of rising inflation or because of accelerating real growth, you raise interest rates to keep it in target; if nominal growth is too weak, whether because of too-low inflation or anemic real growth, you lower rates or, if rates are already at the zero bound, engage in quantitative easing.
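
Here is my reading of that rule, rendered as a sketch. The 5-6% band comes from the description above; the quarter-point rate steps and the function itself are invented placeholders, not anything Sumner has specified:

```python
TARGET_LOW, TARGET_HIGH = 0.05, 0.06  # the NGDP growth band described above
ZERO_BOUND = 0.0

def policy_response(expected_ngdp_growth: float, current_rate: float) -> dict:
    # Lean against deviations of expected nominal growth from the band,
    # regardless of whether they come from inflation or real growth.
    if expected_ngdp_growth > TARGET_HIGH:
        return {"action": "raise rates", "rate": current_rate + 0.0025}
    if expected_ngdp_growth < TARGET_LOW:
        if current_rate > ZERO_BOUND:
            return {"action": "cut rates",
                    "rate": max(ZERO_BOUND, current_rate - 0.0025)}
        return {"action": "quantitative easing", "rate": ZERO_BOUND}
    return {"action": "hold", "rate": current_rate}

# e.g. weak expected nominal growth with rates already at the zero bound:
print(policy_response(expected_ngdp_growth=0.01, current_rate=0.0))
```

Note that the input is an expectation, not a measurement – which is where the difficulty discussed below comes in.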

Since nominal GDP growth is relatively easy to measure (certainly compared to inflation, the precise calculation of which we never stop arguing about), and since businesses are really making their investment decisions based on nominal, not real growth (if spending drops, they’ll sell less, and if it goes up, they’ll sell more – it doesn’t really matter if the change in spending was due to changes in inflation expectations or changes in real growth expectations), stable nominal GDP growth should make business decisions pleasantly predictable, which, in turn, should increase real growth (which is what we’re all aiming for).

But while it’s easy to measure in retrospect, it’s no easier to measure in advance than any other economic variable. Indeed, it’s arguably harder than some, precisely because we don’t really know what drives real growth moment to moment (if we did, economic policy-making would be easy). But the Fed isn’t going to go into completely reactive mode, and only move rates once we know we’re out of target – among other things, if it did that, by definition it would fail at its task of delivering “stable” NGDP. So if the Fed is going to target nominal GDP growth, it’s going to be targeting a forecast, whether produced by a model or derived from market instruments or (most likely) a combination of both approaches.

Okay. In 2008, we faced the first tremors of the financial crisis. The decline in housing was well underway, but the collapse of Bear Stearns was the first indication that Wall Street might have serious problems. The Fed acted quickly to shore up the banking system, opening the window to investment banks that previously couldn’t borrow there (because they weren’t regulated by the Fed). Then we had a summer lull – the calm before the storm that broke in September with the collapse of Lehman and AIG.

If I remember correctly, that same summer lull was the period when commodity prices entered the blow-off phase. Oil hit an all-time high. Gold was going crazy. Every commodity you could name was surging to unprecedented levels. The markets were full of fears that the Fed was being too loose in response to the housing-and-banking crisis, and that this was going to cause emerging markets (China, Latin America) that were linked to the dollar to overheat, and result in inflation at home. Investors ran to the perceived safety of physical commodities, the classic inflation hedge.

But it was right after this point that real GDP fell off a cliff, to be followed closely by nominal GDP as inflation expectations collapsed. In 2009-2010, inflation was negligible and we had negative real growth. We had entered the Great Recession.

So – again, if I understand him correctly – Sumner’s preferred policy would have required the Fed to ignore the market’s expectations of high inflation, and properly forecast the collapse in nominal GDP, so that they would know that just as the market was freaking out about rising inflation, what was really needed was much looser monetary policy.

Is that right?

I admit, to me it feels not very different from saying “if you can see a huge recession is coming, ease, and keep easing until you’re sure it’s not coming anymore.” Isn’t the whole game inside that “if”?

Sumner talks a lot about the need for a futures market to properly predict NGDP. But futures markets can be hugely volatile, even when they are deep. We have a reasonable proxy for NGDP growth right now – it’s called the stock market. Stocks’ value is derived from future dividends; those are paid out of future profits; and profits, in turn, are driven primarily by growth in nominal GDP. The old line is that the stock market predicted 12 of the last 5 recessions, but in August of 2008 the stock market was holding pretty steady. That’s August, 2008. A month before the beginning of the end of the world.

I’m intrigued by the contrarianism of some of Sumner’s advice. In particular – again, if I understand him correctly – his view is that in response to a supply-shock driven spike in prices, the Fed should lower rates, rather than raise them. Why? Because while these shocks do cause a surge in inflation, they hurt real GDP more – and, therefore, lower nominal GDP growth. So if we’re targeting nominal GDP growth, we should ease, and certainly not tighten, whenever oil or food prices spike sharply. That’s interesting!

But it still seems to me that a Fed policy based on nominal GDP targeting would be extremely volatile. The simple fact is that inflation expectations have been – for some time now – relatively stable. From 1984 to 1992, inflation didn’t budge very much from 4%. Then, in the Clinton years, it drifted down, gently, from 4% to 2%. Then, in the Bush years, it drifted gently upward to 3%, before dropping to 1% in the wake of the financial crisis. By contrast, nominal GDP flops all over the place. The deep recession we just went through was not a period of sharply falling prices. We didn’t have deflation. We had modest disinflation. What cratered was real GDP growth. So, again, it seems to me an NGDP targeting policy amounts to saying: ease if you know a recession is coming. But if we knew that . . .

Moreover, precisely because actual measured nominal GDP growth is so volatile, expectations for nominal GDP growth would also be expected to be highly volatile. If the Fed is going to target nominal GDP, it’ll have to be much smarter than the market (yeah, right), or I rather suspect it’ll be hunting a lot of wolves – both predicted booms and predicted busts – that the market is crying, but that aren’t really there.

But I welcome any corrections (if I have misunderstood him) or explanations why doing what he thinks we should be doing won’t be much harder than it sounds to me.

Why Aren't the Democrats Just Advocating Inflation?

There is a very credible argument to be made that the only pro-growth policy worth talking about is increased monetary expansion until the price level catches up with prior trend – in other words: what we need is a good bout of inflation.

The argument basically runs like this. American consumers are struggling under the burden of too much debt. They are trying to get out from under this mountain by saving, rebuilding their balance sheets dollar by dollar. But they have also been terrified by the financial crisis into extreme risk-aversion in investing, and only want to put their savings into ultra-safe instruments like government debt. Meanwhile, inflation is very low and unemployment is very high. In spite of low interest rates, money is actually very tight. Additional fiscal expansion can’t work if monetary authorities cancel it out by tightening money further in response to rising inflation expectations. So, to get demand for goods and services up, we need to reduce the attractiveness of holding money. You do that by “printing” money until people take their money out of the mattress and either invest it in risky ventures with the prospects of a real return, or spend it on goods and services. Higher inflation will also reduce the real burden of the American consumer’s debts directly. Bottom line: we need more inflation. Matt Yglesias among others has been beating this particular drum for some time.

The main argument I’ve made in response is that because America both runs the world’s reserve currency and finances its massive budget deficits abroad, we cannot afford to be substantially looser than other central banks facing similar economic conditions. If the Fed explicitly aimed for 4-5% inflation for the next few years, while the ECB and BOJ stayed where they are, and the Chinese central bank tightened and cut its peg to the dollar to avoid importing that massive increase in American inflation, there’s a material risk that the dollar would lose its global reserve currency status in a precipitous fashion. It’s going to lose that status anyway eventually – but losing it suddenly and violently would have a sudden and very negative impact on the borrowing costs of any entity doing business in dollars, including the U.S. Treasury. That rise in borrowing costs would more than offset any benefit from the higher inflation policy.

That’s not a certainty of course, and coordinated action among the main global central banks would make a huge difference. But if I were a Fed governor, I’d be very reluctant to take that risk, even in the face of 10% unemployment.

But I doubt that the political salience of inflation fears – and those fears clearly are salient; check out the popularity of goldbug talk – is driven by this kind of analysis. So what is it driven by?

It’s not memories of the inflation of the 1970s. Nobody remembers the 1970s anymore; if they did, the phenomenon of 1970s nostalgia would never have occurred. And it’s not just “folk economics,” though I think that’s a part of it.

I suspect that the political salience stems in part from the knowledge, on the part of most working people, that they have little to no bargaining power for wages.

Since the 1970s, wages have stagnated for most workers. Arguably, total compensation has not similarly stagnated, because the cost of health benefits has skyrocketed, but I don’t think that this increase in cost is perceived as a comparable increase in value. Prices, on the other hand, have been more volatile. We’ve had periods of high energy prices and low energy prices. Periods of high food prices and low food prices. And for some categories of goods – electronics, most prominently – rapid improvements in quality have gone hand in hand with stable or even declining prices. Prices, in other words, appear to be something reasonably subject to policy influence. Wages, not so much.

If you look at that experience, and ask yourself, “do I believe inflation will benefit me?” the obvious answer is “no.” The rational expectation of the American worker is that higher inflation will mean a higher cost of living – higher prices for energy, food, rent – but that wages will not keep pace. In other words, higher inflation means a lower standard of living. Why would you vote for that?

Particularly when, if you ask advocates of inflation what it would do to real wages, they have to admit: it would cause them to decline. The theory of what causes recessions that animates advocacy for inflation is that recessions are caused by the stickiness of wages. When demand falls, you want wages to fall to a new “clearing” level that allows business to continue to employ the workforce. That doesn’t happen, though, because wages are “sticky” – hard to cut. So, instead, you get layoffs and reduced output, and the economy settles at a lower level of activity. Inflation enables real wages to be reduced without cutting nominal wages, so you get to the clearing level more quickly with fewer layoffs, and the recession ends.
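
The arithmetic of that story is simple enough to write down (round numbers invented purely to illustrate the mechanism):

```python
# A 5% fall in demand calls for a 5% lower real wage. Nominal cuts are
# resisted ("sticky"); inflation does the same work without any visible cut.
nominal_wage = 50_000
price_level = 1.00

# Route 1: cut nominal wages 5% (resisted in practice, so layoffs happen instead).
real_wage_after_cut = (nominal_wage * 0.95) / price_level

# Route 2: hold nominal wages flat and let prices rise by 1/0.95 - 1, about 5.3%.
real_wage_after_inflation = nominal_wage / (price_level / 0.95)

# Either route lands workers at the same, lower real wage.
print(round(real_wage_after_cut), round(real_wage_after_inflation))  # 47500 47500
```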

If you accept this theory, then it is ultimately better for everyone to accept a short-term drop in the standard of living (a decline in real wages) in order to return to growth, which will ultimately bring wages back up. But if you haven’t experienced rising wages very often in the past, why would you buy this particular pig in a poke? Why wouldn’t you be more inclined to think: you’re telling me the solution to our economic problems is to reduce my standard of living? No thank-you. If I’m going to have to reduce my standard of living, I’d rather do it voluntarily by saving more money. And I’ll be keeping my savings in gold, thank-you very much.

How Can We Know What Public Sector Wages Should Be?

I wrote a post questioning the reliability of a study which claimed to show that Wisconsin public workers face a compensation penalty versus what they would make in the private sector. The motivation behind the study that I criticized is that comparing average compensation for public versus private workers is too imprecise to draw conclusions about whether public sector workers are overpaid, underpaid or paid about right. I agree with this. The method of the study was to control for this selection bias by observing how much more or less people tend to make when they have college degrees or not, are black or white, and so forth, and then to combine these adjustments together into a regression model that should hold these factors equal when comparing compensation for public versus private sector workers. It is this method that I criticized.
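
For concreteness, here is a sketch of the kind of regression in question – synthetic data, invented coefficients, and (to preview the critique below) an unobserved “ability” term deliberately wired into both sector choice and wages:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5_000

# Synthetic workers: observable traits plus an UNOBSERVED ability term
# that influences both sector choice and wages. All numbers are invented.
education = rng.integers(10, 21, n)    # years of schooling
experience = rng.integers(0, 41, n)    # years of experience
ability = rng.normal(0, 1, n)          # invisible to the modeler
public = (rng.random(n) < 0.3 + 0.1 * (ability < 0)).astype(int)
log_wage = (1.0 + 0.08 * education + 0.01 * experience
            + 0.25 * ability + 0.0 * public   # true public premium: zero
            + rng.normal(0, 0.2, n))

df = pd.DataFrame(dict(log_wage=log_wage, public=public,
                       education=education, experience=experience))

# The human capital regression: control for what we can observe and
# read the public-sector dummy as the "premium" or "penalty".
model = smf.ols("log_wage ~ public + education + experience", data=df).fit()
print(model.params["public"])  # nonzero despite a true premium of zero
```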

One technical problem with this approach is that it’s very hard to correctly adjust for these factors. Another, more obvious, one is that if it doesn’t include some factors that are important, it will be unreliable. This second critique was the one I made in my post. This kicked off a fair amount of back-and-forth and other commentary.

Two conservative scholars (Andrew Biggs of AEI, and Jason Richwine of Heritage) have now claimed that while the study in question was flawed, a better modeling method can be employed to show the opposite conclusion – that public sector workers are overpaid versus what they would earn in the private sector.

Here’s Richwine:

Manzi is referring to “the human capital model,” which holds that workers are paid according to their skills and personal characteristics, like education and experience. Most scholars—including Andrew, myself, and Heritage’s James Sherk—use it to compare the wages of the public and private sectors. If the public sector still earns more than the private after controlling for a variety of factors, then it is said to be “overpaid” in wages. But because we cannot control for everything, Manzi is saying, the technique is not very useful.

His critique is reasonable enough, but overwrought. The human capital model has been around for three decades, and it is unlikely that economists have failed to uncover important variables that would drastically change its results.

So, the reason I’m wrong is that labor economists have been building human capital models this way for thirty years, and it’s unlikely that any important factors would not be in the models at this point. Got it.

Here are the sentences that immediately follow in Richwine’s post:

Nevertheless, there are other techniques that address most of Manzi’s concerns. An upcoming Heritage Foundation report uses a “fixed effects” approach, which follows the same people over time as they switch between the private and federal sectors. By looking at how the same person’s wage changes when he moves between sectors, a lot of unobservable traits—intelligence, extroversion, etc.—are accounted for.

So, in fact, there are obviously important factors – like, say, how smart you are – that aren’t accounted for by the human capital model that I critiqued. They’re apparently missing from other standard human capital models as well, since it takes a different “fixed effects” approach to address them.

Biggs, in his post, describes more specifically what their fixed effects analysis does:

To address this, we did a second form of analysis that isn’t subject to Manzi’s objections. Rather than comparing different people at one point in time, this second approach – called a “fixed effects model” – follows the same people over time, specifically as private sector workers found new jobs which could be in the public or private sector. If workers get a bigger raise when they switch from private to federal employment than workers who switch from one private job to another, we can infer that the federal government overpays.

This sounds (and is) reasonable, but do you see the problems? First, the people who leave their jobs, voluntarily or involuntarily, are going to tend to be quite different from the people who stay in their jobs. So generalizing from this group of leavers to the broader population of workers will be hazardous, to put it mildly. Second, even within this group of leavers, while some characteristics of people are fixed, lots of things aren’t. Maybe the nature of the average job in the private sector tends to get them to work harder (or less hard) after the switch than the nature of the average job in the other sector. Maybe people tend to switch to government employment at a point in life when they wanted to start taking it easy (or working harder) anyway. Maybe switching to private employment disproportionately involves a move to a higher-cost urban (or lower-cost rural) area. And so on, just as in the human capital model.
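
To see what the fixed-effects logic buys (and what it doesn’t), here is a sketch with invented numbers: a fixed, unobserved ability term affects wages in both jobs, so differencing within a worker removes it – but nothing removes time-varying differences of the kind just listed.

```python
import random
import statistics

random.seed(4)

def wage_change(switches_to_federal: bool) -> float:
    # A worker's fixed, unobserved ability raises the wage in BOTH jobs,
    # so it cancels out of the before/after difference.
    ability = random.gauss(0, 1)
    true_federal_premium = 0.5 if switches_to_federal else 0.0  # assumed
    wage_before = 10.0 + 2.0 * ability + random.gauss(0, 0.5)
    wage_after = 10.3 + 2.0 * ability + true_federal_premium + random.gauss(0, 0.5)
    return wage_after - wage_before

to_federal = [wage_change(True) for _ in range(5_000)]
to_private = [wage_change(False) for _ in range(5_000)]

# Compare raises across destination sectors: the estimate recovers the
# assumed 0.5 premium because ability differences out.
estimate = statistics.mean(to_federal) - statistics.mean(to_private)
print(round(estimate, 3))

# The caveat in the text: if switchers into government also differ in
# time-varying ways (life stage, job intensity, location), those effects
# load onto `estimate`, and differencing does nothing to remove them.
```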

But this is just quibbling, right? Not really. Biggs and Richwine reference the classic papers on this method by Alan Krueger. If you go back to this work, how does Krueger deal with these potential selection bias problems? He builds a regression model with “occupation, human capital and demographic controls.”

Ah.

We’re right back where we started, in the sense that we have the same kinds of questions about, for example, unobserved sources of selection bias between the two groups not accounted for in the model.

As it turns out, I’ve built many fixed effects, regression and similar models for the purpose of estimating the effects of various changes in compensation, pricing and other similar behavior drivers. And in contexts in which accuracy can be validated, fixed effects models are almost always better than the kinds of static models that I originally criticized. The reason is, as Biggs says, that some of the unobserved differences between groups are accounted for by them. So I agree that, for the purpose of estimating a public-private compensation gap, it is reasonable to believe that fixed effects models are better than static human capital models.

And a motorcycle is better than a bicycle for jumping the Grand Canyon, but I don’t advise you to attempt it with either one. The important accuracy issue here is not relative accuracy, but absolute accuracy. And the methods of assessing statistical significance and the like that are embedded within any of these approaches do not address the critique I am raising, because the critique is that the assumptions of the analysis itself are plausibly very wrong.

Have I then set up a nihilistic position that we can never know anything tolerably well because I can just keep raising these points that might matter, but are not included in the model? In effect, have I put any analyst in the impossible position of proving a negative? Not really. Here’s how you measure the accuracy of a model like this without accepting its internal assumptions: use it to make predictions for future real world experiments, and then see if its predictions are right or not. The formal name for this is falsification testing. This is what’s lacking in all of the referenced arguments in support of these models.

Human capital models, fixed effects models, and other various pattern-finding analyses are useful to help build theories, but a metaphysical debate about the “worth” of various public versus private sector jobs based upon them is fundamentally unproductive. For one thing, it won’t ever end. And as Megan McArdle correctly put it, the practical question in front of us is whether we the taxpayers can procure the public work that we want at a lower cost (or more generally, though less euphoniously, whether we are at the practical optimum on the cost-quality trade-off). If you want an analytical answer to this question, here is what I would do: randomly select some jurisdictions, job classifications or other subsets of public workers, cut their compensation, and then see if we can observe a material reduction in net value of output in these areas versus the control areas. If not, cut deeper. And keep cutting deeper, until we find our indifference point.
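
As a sketch of what that experiment’s readout might look like – every number invented, including an assumed indifference point at a 15% cut:

```python
import random
import statistics

random.seed(5)

def output_after_cut(cut: float) -> float:
    # Assume, purely for illustration, that output is unharmed until
    # cuts exceed 15%, after which it starts to fall.
    penalty = max(0.0, (cut - 0.15) * 2.0)
    return random.gauss(100.0 * (1.0 - penalty), 5.0)

jurisdictions = list(range(200))
random.shuffle(jurisdictions)
control, treated = jurisdictions[:100], jurisdictions[100:]

# Cut deeper in the treated jurisdictions until an output gap appears.
for cut in (0.05, 0.10, 0.15, 0.20, 0.25):
    control_out = [output_after_cut(0.0) for _ in control]
    treated_out = [output_after_cut(cut) for _ in treated]
    gap = statistics.mean(treated_out) - statistics.mean(control_out)
    print(f"cut {cut:.0%}: output gap vs. control = {gap:+.1f}")
```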

There would be obvious limitations to this approach. First, generalizing the results of initial experiments is not straightforward. Second, evaluating output is not straightforward for many areas of government. But at a minimum, and unlike the world of endlessly dueling regressions, this would at least let us see the real-world effects of various public compensation levels first-hand, and allow the public to make an informed decision about whether they prefer the net effect of a change to public sector compensation or not.

(Cross-posted to The Corner)
