Re: The “Liberal Gene”
Jonah Goldberg and John Derbyshire are having an interesting exchange at The Corner on whether researchers have found a gene that, under certain environmental conditions, predisposes individuals to liberal politics.
I wrote a long piece (gated) for National Review in 2008 that described why we should be very skeptical of assertions of causality that are derived from gene association studies. The basic reason is that, while these kinds of studies have remarkable rhetorical force because their purported subject is biology, if you look under the skin at the bones of the analysis, the core method is traditional social science. The article under consideration is an almost perfect illustration of this.
Start with the point that the press release (literally titled “Researchers find a ‘liberal gene’”) is basically worthless, as it so frequently and carelessly elides the difference between claims of “association” (i.e., correlation) and claims of causality.
The basic methodology employed in the real paper starts by arguing that prior research has led to a theory that a specific gene ought to be implicated in a specific behavior. In this case, the hypothesized behavior is that people with a specific gene variant that is believed to predispose individuals to seek out new experiences should be more liberal if they are also embedded in a social network with a broad variety of viewpoints. The point of the study is to “test” this hypothesis by, roughly speaking, looking at a group of people who have the gene variant to see if there is an “association” (there’s that word again) between the number of friends in adolescence and likelihood of being liberal, and then to compare this degree of association to that found among a group of people without the gene variant. They discover that for the group with the gene variant, there is a meaningful association, but that there is not for those without it.
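To make the structure of that test concrete, here is a minimal sketch in code of an association analysis with a gene-environment interaction term. Everything in it (variable names, coding, effect sizes, the data) is a simulated, hypothetical stand-in, not the study’s actual data or model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000

df = pd.DataFrame({
    "variant": rng.integers(0, 2, n),   # 1 = carries the variant (hypothetical coding)
    "friends": rng.poisson(5, n),       # adolescent friendships, the "environment"
})

# Simulate the hypothesized gene-environment interaction: friendships
# shift ideology only among variant carriers.
logit_p = -1.0 + 0.25 * df["variant"] * df["friends"]
df["liberal"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# The claim rests on the interaction term: a significant
# variant:friends coefficient is read as "the variant matters,
# given the social environment."
fit = smf.logit("liberal ~ variant * friends", data=df).fit(disp=0)
print(fit.summary())
```

Nothing in that fitting procedure knows that “variant” is biology; the machinery is ordinary social-science regression, which is the point developed below.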
The big problem, of course, is that other things might also vary between the groups, and these other differences might be the real cause of the observed behavior difference. Here’s how I put this in the piece from the magazine:
Media outlets will often speak loosely of things such as a “happiness gene,” a “gay gene,” or a “smart gene.” The state-of-the-art method for finding such a link is something called a “genome-wide association study” (GWAS). In a GWAS, scientists use blood or saliva samples to sequence the DNA for a group of several thousand people who exhibit a trait or behavior of interest (the “case group”), and for a second group of several thousand who do not exhibit the trait or behavior (the “control group”). …
A second limitation of a GWAS is that it detects association rather than causation. Suppose we found that a case group of persons suffering from a disease had a greater incidence of some gene than did a control group, but that we failed to notice that the case group was disproportionately of Chinese ancestry. Culturally transmitted behaviors in the case group might be responsible for the disease, even if these behaviors had nothing to do with the gene in question. That is, the gene could be nothing more than a marker for Chinese ancestry, and hence for participation in behaviors that cause the disease. Geneticists call this problem “stratification,” and deal with it by carefully matching individuals in the case and control groups to ensure that the groups really are comparable. The problem is that these stratification effects can be fiendishly subtle. No matter how carefully we match cases with controls, there can always be some unobserved environmental factor correlated with, but not caused by, a genetic difference between groups, and this environmental factor might be what is actually causing the disease.
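The stratification problem is easy to see in a toy simulation. In the sketch below (all numbers invented for illustration), the gene is causally inert, yet a naive case-control comparison finds it “associated” with the disease because it tracks ancestry, and ancestry tracks the behavior that is the true cause:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

ancestry = rng.random(n) < 0.3                           # True = subpopulation A
gene = rng.random(n) < np.where(ancestry, 0.6, 0.1)      # causally inert marker
behavior = rng.random(n) < np.where(ancestry, 0.5, 0.1)  # the true cause
disease = rng.random(n) < np.where(behavior, 0.20, 0.02)

# Naive case-control comparison: the inert gene looks "associated".
print("gene freq in cases:   ", round(gene[disease].mean(), 3))
print("gene freq in controls:", round(gene[~disease].mean(), 3))

# Stratifying on ancestry makes the association vanish.
for a in (False, True):
    s = ancestry == a
    print(f"ancestry={a}: cases {gene[s & disease].mean():.3f}"
          f" vs controls {gene[s & ~disease].mean():.3f}")
```

The conditioning step works here only because the simulation lets us observe ancestry perfectly; the real danger is the stratifying variable nobody thought to measure.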
The researchers are well aware of the centrality of this problem. The crucial methodological passage in the full research paper starts with this:
Genetic association studies test whether an allele or genotype occurs more frequently within a group exhibiting a particular trait than those without the trait (e.g., is the frequency of a particular allele or genotype higher among liberals than conservatives?). Because a significant association has several possible explanations, there are two main research designs employed in association studies to isolate the effect of an allele on a trait, case-control designs and family-based designs (Carey 2002). Due to potential population stratification in our sample, we chose to employ a family-based design, which eliminates the problem of population stratification by using family members, such as parents or siblings, as controls.
That is, the researchers intelligently use family members as controls to try to optimize case-control matching. But this does not come close to eliminating the problem, as the researchers then describe:
We include individuals from the same family in the analysis, and thus the observations are not independent. Therefore, we use a generalized estimating equations approach with an independent working correlation structure for the clustered errors, to estimate the model. Only siblings that have different genotypes, in this case a different number of 7R alleles, are informative for the within-family component of variance since wij equals zero otherwise. However, families that share the same genotype are also included in our analysis for improved estimation of the between-family component. We have also included controls in the model for both age and gender, as there are numerous instances of age effects in gene-environment interactions and there are sex specific genetic influences on political preferences (Hatemi, Medland, and Eaves 2009c). [Bold added]
In other words, the researchers have built the functional equivalent of a regression model, through which they believe that they have comprehensively controlled for other effects in just the way that any political science, economics or other social science researcher would have in a paper that tried to evaluate the effect of any non-genetic purported cause of such a propensity (which makes a lot of sense, as the article was actually published in The Journal of Politics).
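For concreteness, here is a rough sketch of that estimation strategy as I read the quoted passage: an ordinary regression with a gene-environment interaction, fit by GEE with an independence working correlation and errors clustered by family. The variable names, coding, and simulated data are hypothetical stand-ins, not the paper’s:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_fam, sibs = 500, 2
n = n_fam * sibs

df = pd.DataFrame({
    "family": np.repeat(np.arange(n_fam), sibs),
    "n_7r": rng.integers(0, 3, n),   # count of 7R alleles (hypothetical coding)
    "friends": rng.poisson(5, n),
    "age": rng.integers(18, 30, n),
    "female": rng.integers(0, 2, n),
})
# Hypothetical continuous ideology score with a gene-environment term.
df["ideology"] = 0.1 * df["n_7r"] * df["friends"] + rng.normal(0, 1, n)

# GEE with family-clustered errors and an independence working
# correlation, per the passage quoted above.
fit = smf.gee("ideology ~ n_7r * friends + age + female",
              groups="family", data=df,
              cov_struct=sm.cov_struct.Independence(),
              family=sm.families.Gaussian()).fit()
print(fit.summary())
```

The family grouping improves the matching, just as the authors say; it does not turn observational data into a randomized experiment.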
But as I described in my piece, this means that in spite of all the white lab coat talk about alleles and so on, we should treat this with the same skepticism that we would bring to any social science regression model:
So how is a GWAS showing an association between Gene X and aggressiveness different from a social-science study showing a correlation between watching lots of violent TV and aggressiveness? Mathematically, it’s not. In both cases we start by measuring aggressiveness for each person. We then compile for each person a list of data providing information on potential causes of aggressiveness: in one case genomic information, and in the other, sociological observations on childhood experiences, school quality, and so on. In the first case we observe that aggressive people have a higher incidence of Gene X; in the second that they watch a lot of violent TV. The reliability of GWAS studies is thus subject to the same limitations that we think of in connection with sociology or economics (as opposed to, say, chemistry). The only way around this — the only way to attain the precision of chemistry — would be actually to show the chain of biochemical processes by which a set of named genes creates the observable brain functions collectively defined as “aggressiveness.” Of course, if we could do that, we would have no need for a GWAS study.
The claims of causality that arise from such studies should accordingly be treated with the appropriately intense skepticism that we apply to sociological or econometric studies. In the middle of the 20th century, Friedrich Hayek and the libertarians he inspired faced those who asserted that an economy could be successfully planned. The libertarian position was not that such planning could be proved impossible in theory, but that we lacked sufficient information and processing power to accomplish it. The world of economic interaction is so complex that it overwhelms our ability to render it predictable; hence the need for markets to set prices. This is the same analytical problem we face when trying to predict a mental state that depends upon a large number of genes. It is unclear whether we will ever understand how this complicated machinery and its interactions with the environment come together to create characteristics of mind. It is certain, however, that we do not have such an understanding now, and that we won’t know such a project is achievable until we achieve it.
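The equivalence asserted in the quoted passage is easy to exhibit. In the sketch below (simulated data, hypothetical names), the “genetic” study and the “sociological” study are the same estimator with a different column plugged in:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1000

# "GWAS-style" data: a binary gene marker.
gwas_df = pd.DataFrame({"gene_x": rng.integers(0, 2, n)})
gwas_df["aggressive"] = (rng.random(n) < 0.2 + 0.1 * gwas_df["gene_x"]).astype(int)

# "Sociology-style" data: hours of violent TV watched.
survey_df = pd.DataFrame({"tv_hours": rng.poisson(3, n)})
survey_df["aggressive"] = (rng.random(n) < 0.1 + 0.03 * survey_df["tv_hours"]).astype(int)

# Same estimator, different column: the machinery cannot tell
# "biology" from "sociology", and neither fit exhibits a causal chain.
print(smf.logit("aggressive ~ gene_x", data=gwas_df).fit(disp=0).params)
print(smf.logit("aggressive ~ tv_hours", data=survey_df).fit(disp=0).params)
```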
(Cross-posted to The Corner)
Science has to make do with the kind of knowledge it can get. Sometimes we can do controlled randomized experiments; sometimes we need to rely on statistical analysis of correlations. That’s true in both social and natural science.
In the end, it’s a matter of taste. You are right to caution against pretending we know more than we do. But if scientists had historically waited for the kind of methodological perfection that you require of social science, we wouldn’t ever have learned anything.
What you say about genes for liberal political beliefs is also true about genes for finch beak size. We are a long way from understanding the mechanisms.
As for Hayek, he was certainly willing to spin out elaborate theories about things he knew very little about — like the development of the common law, and he made bizarre and utopian proposals for constitutional change. He was just plain wrong in asserting that social democracy inevitably leads to totalitarianism, and he was just plain wrong about money. Friedman is a better hero.
— Pithlord · Oct 29, 04:57 PM · #
Thanks to the always sensible and insightful Pithlord.
As I pointed out in the past, the social sciences actually have a pretty good record of making accurate predictions about behavior on average. They don’t have a good record of making predictions about behavior on the margin, for reasons that should be obvious. For example, I predict that one year from now, the market capitalization of Apple will be greater than the market capitalization of GM. On the other hand, I have no clue which would, on the margin, be a better investment.
Similarly, I can predict with a high level of confidence that one year or ten years from now, the average test scores in the Beverly Hills Public Schools will be higher than the average in the Compton public schools. How to make marginal changes that narrow the gap (other than to worsen BH performance), however, is much more of a challenge to social sciences.
But then, there are plenty of unmet challenges in the hard sciences, as well: where is my faster-than-light warp drive, my time machine, and my anti-gravity spaceship? The simplest explanation for the repeated failure of social scientists to equalize test scores in Beverly Hills and Compton is roughly the same as the simplest explanation for the repeated failure of physicists to invent a perpetual motion machine.
— Steve Sailer · Oct 29, 06:47 PM · #
Yeah, the stuff on the margins of knowledge is just more interesting than the stuff we have a real handle on. You can get into big arguments about how much hotter 2050 will be than 2010, but no one doubts next July Winnipeg will be hotter than next December.
— Pithlord · Oct 29, 09:45 PM · #
Look, I don’t have any right to act as an unbiased party here; I’m currently getting credentialed in the social sciences. But, unsurprisingly, I’m with Pithlord and Steve.
As Steve says, much of human behavior is eminently predictable and categorizable, with high degrees of statistical validity and reliability. Much of it is obvious, true. But the fact that what is predictable is often predictably obvious doesn’t make the degree to which it is accurately predicted any less impressive. (A good analogy is what David Albert points out about string theory: the fact that string theory’s prediction of gravity is somewhat moot because gravity was a well understood phenomenon decades before the advent of string theory doesn’t change the fact that string theory is, in fact, spectacularly successful at predicting gravity.) The important thing is that what is most accurately predicted can, with great effort and careful scholarship, be used to explore more complex phenomena.
So let me just consider my particular interests for a moment. One of my jams is vocabulary acquisition and vocabulary usage. I’m particularly interested in scales of vocabulary knowledge, the fact that knowing or not knowing a particular word isn’t a binary but rather a spectrum of understanding, familiarity and comfort, and along more than one axis, for words with secondary and tertiary definitions. The way to consider this (in my opinion) is with the challenge mechanism; in other words, part of understanding a given subject’s handle on a word is understanding the method that you’ve used to assess that subject’s knowledge. It’s not exactly practical to ask someone to write out every word he or she knows. So you can assess the easy way, which is challenge based— give the subject the word you are interested in and ask him or her to define it. The other way, and for many purposes the more useful (but far more elliptical) way, is context based; analyze a subject’s writing for words that you are interested in testing for and assess whether the subject has used the word correctly denotatively, assess the usage on a stylistic/contextual/awkwardness scale, and check whether the definition is the primary definition or secondary or obscure, etc.
Now, I can tell you some things (although I certainly wouldn’t in writing with my full name attached) with a great deal of accuracy that you will likely find banal and a little bit useless: almost everyone who uses a word with high fidelity in context can offer an accurate definition when challenged. Those who use a larger vocabulary contextually have larger challenge-based vocabularies than those who use fewer words contextually. And the ability to provide a reasonably accurate definition of a word when challenged is far less demonstrative of the ability to use the word accurately in context than the other way around.
Of course, this stuff isn’t really my primary interest. My primary interest is in acquisition, and particularly in which assessment methods are more effective as adjuncts to teaching methods. In other words, I’m skeptical about the ability of challenge-based vocabulary training to consistently lead to expanded contextually useful vocabularies. I’d like to assess that empirically. Of course, I’d also like to work it the other way around— to assess contextual acquisition from reading in contrast with list-based (challenge-based) acquisition. And I think it’s relevant, as hundreds of thousands of students get weekly vocab lists and quizzes on those lists.
In practice, assessing the differences is really hard. It’s difficult to impossible to control for where students have been exposed to vocabulary. Defining what counts for adequate contextual exposure is difficult. There are any number of confounding variables. This may be unfair, but by the standards Jim is interested in I’m not sure I could ever undertake the research. (Of course, there are commenters around here who would say what I’m interested in is obviously useless and that I shouldn’t try.)
But my standards are a bit different. My question is, what’s the use? If I can take what I’ve learned with my research and use it to make adjustments to pedagogy, and if those adjustments themselves can be said, with some reasonable strength, to be empirically beneficial to student outcomes… that’s enough, for me. And I’m ready and willing for the chain of causality that leads from my research to eventually improved student outcomes to be long, winding, and not entirely clear. I don’t need perfect; I just need to contribute.
— Freddie · Oct 29, 10:10 PM · #
mr. manzi,
your comments sensibly enough emphasized spuriousness. a perhaps equally important problem might be publication bias. there are a lot of genes and a lot of personality traits — throw enough random noise together and you’ll find some p<.05.
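a quick throwaway simulation of that point (everything in it is made up): wire up a few hundred causally inert genes, a trait that is pure noise, and count the “significant” associations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_people, n_genes = 1000, 500

genes = rng.integers(0, 2, (n_people, n_genes))  # inert binary markers
trait = rng.normal(size=n_people)                # pure noise

hits = 0
for g in range(n_genes):
    carriers = genes[:, g] == 1
    _, p = stats.ttest_ind(trait[carriers], trait[~carriers])
    hits += p < 0.05

# Expect roughly 5% false positives: about 25 of 500.
print(f"{hits} of {n_genes} inert genes 'associated' at p < .05")
```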
similarly, you might like this article
http://www.jeremyfreese.com/docs/Freese-AJS-GeneticsAndSocialScienceExplanation.pdf
— gabriel · Oct 29, 10:15 PM · #
To cast aspersions on the entire toolbox of observational studies (confounding is always a challenge, maybe there’s selection bias!) isn’t to actually engage the science. It’s convenient to dismiss controlled associations as mere artifacts or biases, but you need to actually address what, specifically, could have led to those biases. You spent 500 words explaining that correlation does not imply causation. I think your readers are better than that.
R. A. Fisher, one of the great statisticians, argued for years after there was significant population evidence (but still very little biochemical data) that the association between smoking and cancer was a case of “arguing from correlation to causation.”
Observational studies, including those using genetic exposures, can approximate experimental studies. And we should be skeptical, but not so skeptical that we refuse to engage the science on the grounds that the methodology is irredeemably flawed.
— Ryan · Oct 29, 11:05 PM · #
Now I’ve only scraped together a master’s in history and haven’t started my PhD in philosophy (I’m one of those silly liberal arts kids). So take what I say with whatever grain of salt you’d like. But there are two points I’d like to make. One is that if people are as predictable as claimed, then historians would have figured out the future rather well by now. I think the failure of the Annales School to do just that might indicate people are a bit harder to figure out than we think. Since we don’t have perfect knowledge (the clearing of all clearings as Heidegger would say), it’s a bit rich to make such a claim of predictability.
My second point is a defense of Manzi wasting 500 words and his readers’ time. David Hume spent considerably more words arguing that correlation is not the same as causation. Now, not to disparage Mr. Manzi’s intelligence, but I doubt that his mind is equal to that of Hume. So if Mr. Manzi is able to see the same flaw as Hume then I think there’s something to the whole “correlation is not causation” thing. Yeah, correlated events can have a causal link. But you can’t jump from correlation to causation without explaining the causal link. I believe that’s what is called a theory. And if, given the same conditions, you can’t repeat the results, then the theory doesn’t work. Sure, it might have its uses in limited ways just as Newton’s theories are still useful on the everyday scale. But you still have to recognize that theory’s limitations. If it gets you closer to a better theory then it’s served a purpose.
In this case if enough studies are done that show a strong correlation between a certain gene and liberal perspectives then start working on an explanation for said correlation. Hopefully you can get enough studies done before the definition of liberal changes, again.
— Jordan · Oct 30, 12:56 AM · #
Ryan sez: “Observational studies, including those using genetic exposures, can approximate experimental studies.”
Over the years I’ve said much to try to overcome the disparagement of observational studies that you tend to get from Deweyite-numbed minds in our educational colleges. So I’d like to agree with what you’re saying here. But I have no idea what you mean by observational studies “approximating” experimental ones. I don’t see how that could be. Could you explain?
— The Reticulator · Oct 30, 04:03 AM · #
Pithlord,
Thanks.
We have to make do with the knowledge that we have, but this doesn’t make it science.
I don’t demand perfection, and I find this research interesting. I have consistently argued against using these kinds of findings as the justification for counter-intuitive policy interventions.
Hayek (like all of us) was wrong about a lot; but I think he was right about the knowledge problem.
— Jim Manzi · Oct 30, 10:11 AM · #
The problem may be that we have a monolithic conception of “science”. It’s a matter of preference, but I’d rather emphasize that science comes with varying levels of certainty and methodological rigour than argue about what counts as science as such.
I certainly agree that none of this “liberal gene” stuff has any policy implications in a democracy.
I don’t mind acknowledging Hayek had a powerful argument against a centrally planned economy, although I doubt it has any contemporary political relevance since no one even remotely politically important disputes the argument any more. The problem is with ideological Hayekians.
— Pithlord · Oct 30, 01:40 PM · #
Look, science is a series of practices. That’s the sort of statement that drives a certain stratum of people crazy, but you don’t have to get into deep, fruitless debates about epistemology to acknowledge that it’s banally true that the human enterprise called “science” operates in ways that do not conform to almost any of the stringent definitions of science or the scientific method. And that practice, however displeasing to those who want science to occupy some incredibly narrow range of definitional fidelity, contributes to human flourishing.
— Freddie · Oct 30, 02:20 PM · #
Popper was eventually driven to denying astronomy and biology were sciences at all.
— Pithlord · Oct 30, 03:11 PM · #
Jim:
I don’t mean to derail your thread, but I was hoping you would respond to this statement by Pithlord:
I’m not sure I understand that statement in light of this. If central planning is so passe, why is there so much of it?
— jd · Oct 30, 09:53 PM · #
Hayek’s valid argument was that the ubiquity of implicit knowledge made planning a whole society impossible. He accepted that organizations within a society could be centrally directed at least to some extent.
The existence of a Ministry of Health or of central statistics is not in contradiction to the part of Hayek’s argument everyone accepts.
Whether this particular bureaucracy is too big is a question reasonable people can disagree about, but its defenders are not committed to central planning in the sense of 1930s communists and socialists.
— Pithlord · Oct 30, 11:09 PM · #
How does it matter if they’re not committed to it in the sense of 1930s progressives? The feds are centrally-planning one-sixth of the US economy—an economy that the 30s commies couldn’t have imagined and one that the Canadians and British are…well, less than a sixth. Seriously, do you not believe there is “central planning” going on?
— jd · Oct 31, 12:01 AM · #
Hayek in “The Use of Knowledge in Society”:
“This is not a dispute about whether planning is to be done or not. It is a dispute as to whether planning is to be done centrally, by one authority for the whole economic system, or is to be divided among many individuals. Planning in the specific sense in which the term is used in contemporary controversy necessarily means central planning—direction of the whole economic system according to one unified plan.”
And in that sense, he won the argument.
— Pithlord · Oct 31, 12:06 AM · #
Freddie sez: “And that practice, however displeasing to those who want science to occupy some incredibly narrow range of definitional fidelity, contributes to human flourishing.”
You made good points, but I hope you will agree that this practice of science contributes to a lot more than human flourishing. It also contributes to warfare, death, to the banalization of human life, the destruction of society, the corruption of our political system, and some neat illustrations and videos to help us understand how things work.
— The Reticulator · Oct 31, 03:08 AM · #
Experiments are a wonderful thing, but how do you figure out what experiments to perform without observational analysis first?
— Steve Sailer · Oct 31, 04:32 AM · #
There is a lot of potential for controlled, randomized experiments of teaching techniques – provided the political problems can be overcome.
Similarly with the contracting out of public services, police tactics, social work interventions. Sometimes the political problems are based on a lack of understanding, sometimes on interest group politics, but sometimes on reasonable concerns about equality before the law and the state.
Given the reality of politics and ethics, experiments just are not going to be perfect, and we are always going to have to rely on observational studies.
— Pithlord · Oct 31, 02:26 PM · #
Pithlord:
Planning in the specific sense in which the term is used in contemporary controversy necessarily means central planning—direction of the whole economic system according to one unified plan. And in that sense, he won the argument.
I appreciate the fact that he won the argument. I believe he won the argument as well, and that anyone who argues that central planning leads to anywhere but disaster is ignoring history—to say nothing of in-our-face current events. So I stipulate that he won the argument. But judging by the US federal government of the last 80 years, and especially last year with the enactment of Obamacare, it appears that Hayek, Friedman, Reagan and all of us who believe in smaller government have won the argument—to no avail: we are losing the war.
— jd · Oct 31, 04:05 PM · #
You made good points, but I hope you will agree that this practice of science contributes to a lot more than human flourishing. It also contributes to warfare, death, to the banalization of human life, the destruction of society, the corruption of our political system, and some neat illustrations and videos to help us understand how things work.
Very, very true.
— Freddie · Oct 31, 05:02 PM · #
happy halloween!
Samhain, matoko_chan.
— THE · Oct 31, 07:11 PM · #
Pithlord (and actually, Steve, Freddie, et al):
There are several parallel threads within what you’re saying (all of them interesting).
I agree entirely that debates about “Is X science?” just aren’t super-useful. I have a book coming out that goes into my views on this plus Hayek, etc. in enormous, boring detail. I’ll just say on the science part that there is a class of human activity (as per Freddie’s way of putting it) that enables us to make useful, reliable, non-obvious predictions about the effects of human action. This is what I mean by science (or, if you prefer, “gorp” or any other set of syllables you want). I think this is a coherent set of practices, and relies on experimentation.
As per Steve’s comment,
Exactly (though actually, you can and often do use much more intuitive processes than analysis to decide this). Further, you must have non-experimental theory-building to apply the results of the experiment in any way to any future instances, i.e., to create the prediction rules that are the coin of the realm. Of course, these forward prediction rules must then be experimentally tested – hence the endless inhalation-exhalation process (as I term it) of theory – experiment – theory and so on. And note that theory always does come first.
When you say,
I don’t think that’s strictly true, and in fact, in C&R, he takes an incredibly broad (in the disciplinary sense) view of fields that can be scientific, as long as they subject themselves to falsification. Note that in Popper’s view, non-experimental falsification is a useful method for progress.
I devote an entire chapter of the book to non-experimental sciences, sub-divided into historical sciences (e.g., geology or certain parts of evolutionary theory) and fields for which experiments are theoretically feasible but not practical (e.g., astrophysics).
Agree, and under the aegis of the IES, we have finally started to do a lot of them over the past 10 years. I try to review every one of them in my book.
Pithlord, you say that:
Beyond that he argued that some of them must be planned. This is Coase’s essential definition of the firm – a sub-region for which economic activity is consciously planned. Beyond this, Hayek thought that some uniform system of law represented an extremely flexible planning system for the society as a whole. Hayek’s ideas, unlike those of many of his “followers”, generally resist reduction to bumper stickers.
Finally,
Not so sure about this. Experiments are not perfect and (very) often not feasible. Yet we still have to make decisions. Check. What doesn’t follow is that “scientific” opinion (or whatever label you want to put on it) is superior to common sense, folk wisdom, “those morons in the Tea Party” or whatever label you want to put on it. Academic methods don’t only compete with one another, but with all of these other methods. I’m very skeptical (and believe I have shown in multiple chapters of a book that I have good reason to be) of the effectiveness of non-experimentally-verified “academic” (I mean this descriptively, not pejoratively) methods for the purpose of predicting the effects of policy interventions. This is what Steve means (I think) by marginal effects. But these are the effects of our actions, and ultimately, that is what I want to predict to rationally guide my actions.
— Jim Manzi · Oct 31, 10:53 PM · #
Michael Vassar recently gave a GoogleTalk about the difference between Science and Scholarship and the existence of multiple forms of knowing.
— THE · Oct 31, 11:52 PM · #
I mean that observational studies can approximate experimental studies when certain assumptions are met (exchangeability, consistency, positivity, etc…). These assumptions are extremely onerous, and often unreasonable, but they’re theoretically valid. So the ideal (abstract) observational study can determine causal relationships.
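To make “approximate” concrete, here is a minimal simulated sketch of one such method, inverse probability weighting. All names and numbers are invented, and the key assumptions hold by construction, which is exactly what real data can never guarantee:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

confounder = rng.random(n)            # measured covariate
p_treat = 0.2 + 0.6 * confounder      # positivity: bounded away from 0 and 1
treated = rng.random(n) < p_treat     # confounded "exposure"
outcome = 2.0 * treated + 3.0 * confounder + rng.normal(0, 1, n)

# Naive contrast is biased upward by the confounder.
naive = outcome[treated].mean() - outcome[~treated].mean()

# Weight each subject by 1 / P(their own exposure | confounder).
w = np.where(treated, 1 / p_treat, 1 / (1 - p_treat))
ipw = (np.average(outcome[treated], weights=w[treated])
       - np.average(outcome[~treated], weights=w[~treated]))

print(f"true effect 2.0 | naive {naive:.2f} | IPW {ipw:.2f}")
```

With the true propensity in hand, the weighted contrast recovers the effect; in practice the propensity must itself be estimated, and every step leans on the assumptions above.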
The assumptions, not the methodology, are the proper point of discussion. But the assumptions require getting down and dirty in the science. So I’d like to see exactly why Manzi thinks that, to take an unethical hypothetical, genetically engineering individuals (randomly) to have or not have this gene would not influence their political decisions. Some unmeasured confounder? Some sort of selection bias? This argument is difficult, but more important than whether GWAS can inform our policy decisions.
Manzi – I’m curious about your argument that observational academic research is, in general, a poor tool for guiding policy decisions. I disagree, with the caveat that the observational results must be reproducible in different groups, with different methods, and plausible. Here plausibility is the weak version of your argument that the biochemical pathways be elucidated.
Maybe I’m biased (I come from the health fields), but not many academics jump excitedly from observation to recommendation. Epidemiologic studies are hedged to the extreme. But I can’t speak for those yahoos in economics or sociology. Public health behavioral research may be an exception here.
But even then I’m highly suspicious of the libertarian-conservative hypothesis that disseminated wisdom is a better guide for policy decisions.
Anyhow, the dose of skepticism is appropriate.
— Ryan · Nov 1, 12:05 AM · #
A book by Jim Manzi coming out? Cool. What is the title and will it be out for Christmas?
— y81 · Nov 1, 12:40 AM · #
I would hasten to add that, despite what people seem to be assuming here, the political impositions on empirical understandings of education cut both ways. This will not be a popular argument here, but this is just true: a lot of people are interested in evaluations of teaching quality only insofar as those evaluations make it easier to smash unions and make quality of employment worse for teachers.
Also, I have argued for years, and will continue to argue, that the use of student evaluation as a method of teacher evaluation poses many severe epistemological problems. It is really, really hard to separate signal from noise in such a scheme. But any suggestion that this is the case is immediately dismissed by people stuck in “get tough” rhetoric.
Here’s the sort of thing I’d like to see more of:
http://etscrs.submit4jobs.com/index.cfm?fuseaction=85332.viewjobdetail&CID=85332&JID=95889&notes_id=1
The listing doesn’t mention it, but this is actually an initiative of the Bill and Melinda Gates Foundation. I’ve been encouraged by the general tenor of their rhetoric on these issues, because a simple pragmatism and dedication to solutions that don’t necessarily satisfy Republican policy agendas are fairly rare in the reform world.
— Freddie · Nov 1, 12:42 AM · #
“I mean that observational studies can approximate experimental studies when certain assumptions are met (exchangeability, consistency, positivity, etc…)” (Ryan)
Ah, I think I now know what you’re talking about, and that I would agree. Thanks for clearing up that point.
— The Reticulator · Nov 1, 12:53 AM · #
y81:
Thanks. It’s not coming until next year, likely in the fall.
— Jim Manzi · Nov 1, 02:03 AM · #
1. Correlation doesn’t prove causation, but lack of correlation (if demonstrated using a variety of data and analytic methods) goes a long way towards proving lack of causation. So it’s at least useful that way. (Correlation is also useful to point to where we should be trying to discover/explain causation. Big Rs suggest potentially fruitful research avenues. At least.)
2. Re Hayek and knowledge. Fatal flaw in my opinion: it assumes that irrational decisions by humans are random, so they cancel each other out and the rational decisions predominate.
But we know that human decisions are systematically irrational. (cf. Caplan’s Myth of the Rational Voter to see this worked out quite cogently re: voting decisions.)
People who have expert knowledge about systematic human error could design policies to correct for that systematic irrationality, i.e., policies that deliver over the long term to correct for humans’ short-term biases.
Yes, those decision makers are also systematically irrational, but their expert knowledge of that fact (and the details of that fact) could serve to correct, at least some, for both their cognitive failings and those of the crowd.
— Steve Roth · Nov 1, 06:30 PM · #
One problem with relying on popular (or distributed, or whatever) wisdom is that it is wisdom built up over time. It takes time to develop, perfect, and widely transmit the knowledge. This is OK in periods of long-term stability, but in periods of dynamic change common wisdom can quickly go stale. Of course, not everything changes in periods of dynamic change, so common wisdom (or whatever) might be valid in some cases and not in others.
— cw · Nov 1, 09:03 PM · #
Steve Roth,
I wouldn’t trust expert knowledge of the expert’s individual limitations. Our ability to self-deceive seems to increase with intelligence more than our ability to uncover our self-deceptions.
What I’m willing to put some trust in is the collective process of scientific discovery, which I believe has better devices for addressing cognitive biases than any other area of human activity. That says nothing about individual scientists, who are as human as anyone else.
— Pithlord · Nov 2, 12:13 AM · #
“I wouldn’t trust expert knowledge of the expert’s individual limitations. Our ability to self-deceive seems to increase with intelligence more than our ability to uncover our self-deceptions.”
I’ve read about studies that say people who are more incompetent people are at some task, the more they tend to overestimate their competence. I don’t know if there is a correlation between competence at a particular task and general intelligence, but I would think there would be some sort of general link.
— cw · Nov 2, 08:41 PM · #
Have I mentioned to anyone what a great proofreader I am?
Anyway, I meant to write: the more incompetent people are at some task, the more they over-estimate their competence.
— cw · Nov 2, 08:45 PM · #