Archive | Research Quality

Debasing the Currency of Criticism

Public policy research is often methodologically mediocre.  You might think, therefore, that more methodological criticism would help!  Unfortunately, I don’t think it will be that simple. Technical critique is too often an emergency escape hatch for tired ideologues reluctant to litigate serious challenges to their principled commitments.  Paradoxically, research quality may suffer as a result.  A drained swamp lowers all boats.

Nearly every activity, including research, can be evaluated according to two types of standards: internal and external.  Internal standards define the activity engaged in, while external standards evaluate that activity in light of the purpose it serves, the values it reflects, or its likely practical effects.  For simple activities, internal standards are less controversial than external standards.  We can usually more-or-less agree about what some activity involves in order to determine whether someone is or isn’t engaged in it.  External standards are often controversial because they reflect disputed moral, political, or aesthetic values.

“Speech,” for example, is the activity of verbally expressing ideas using language.  This is an internal standard for speech, and there are various ways in which our efforts to speak may fall short of meeting it.  An infant who cries for a bottle expresses herself vocally, but because she does not know how to articulate words, her cries fail to qualify as “speech.”  If you have laryngitis, you may form words, but if no sounds result, you, too, are failing to speak.  If my mouth is numb following a dental procedure, I may emit sounds but have difficulty forming words.  You might say that I am “speaking badly” if my mumbling efforts are partially understandable.  In the context of an internal standard, “bad” speech lies on a continuum between what is inarguably speech and what fails to be speech at all.

But there is another, completely different way in which speech can be “bad.”  Speech is often called “bad” if it is spiteful or inaccurate.  Speech can also be bad in the sense that it promotes harmful behaviors or unjust policies.  These are all external standards for speech.  To raise the question of whether or not some particular speech is bad by some external standard is to raise philosophical questions about the validity and importance of that external standard as well as empirical questions about how the speech in question measures up.  Speech can be very, very bad by an external standard while being good by the internal standards of speech.  It makes perfect sense to say that Mussolini gave “bad speeches,” while at the same time conceding that he “spoke very well.”

Research of all types can be evaluated by both internal and external standards.  Ideological commitments are external standards for the wonkish activity of public policy research.  They should therefore function as figurative book-ends: as a basis for evaluating the worthiness of a particular research question, and as a basis for recommendations in light of a research result.  In between, a wonk must adhere to the internal methodological standards applicable to public policy research in order to do her job well.

Because research is a complicated activity, its internal standards are disputed among experts and poorly understood by the public.  Most public policy research combines ideologically-charged subject matter with contestable methodological choices.  The result is a grave moral hazard:  critics with fundamentally ideological complaints can till once more the exhausted soil of first principles…or, they can apply their efforts to the more pliant field of methodological critique!

Denying the validity of inconvenient research frees advocates from the challenging task of accommodating inconvenient facts.  As Economist blogger Will Wilkinson explains, this rhetorical strategy is the chattering class’ current Nash equilibrium:

Perhaps it’s wishful on my part to think, as I do, that most economically literate observers really do understand that raising the minimum wage will screw up the prospects of a fair number of poor young workers.  Those who favour raising the minimum wage anyway just think that, all things considered, that’s a price we ought to be willing to pay. But they can’t say that, just as second-amendment enthusiasts can’t say that an occasional grim harvest of kindergartners is a price we ought to be willing to pay for the freedom to own guns.

Charges of ineptitude have become so pervasive that the research signal gets lost in the media noise.  Every study bearing on a controversial issue, if it is reported on at all, will be criticized by ideological hostiles as a failure.  The resulting foxhole solidarity discourages wonks from calling out fellow travelers’ weakest work.  Methodological criticisms are roughly equally effective regardless of their merit, because few journalists, pols, and voters can evaluate their strength, and even fewer actually will.

Knowledgeable wonks and writers should identify genuinely weak research, but we should be equally ready to praise methodologically sound work even when its findings and prescriptions are unwelcome.  When ostensible technical fault-finding is actually sublimated ideological frustration, it threatens to reduce research quality at think tanks by debasing the value of technical criticism.  Good research is expensive and difficult to produce, while the bad stuff is relatively cheap and easy.  If no one signals the objective difference, bad research will inevitably crowd out the good.  Perhaps a consortium devoted to recognizing the best research at each participating think tank could begin to realign incentives towards producing high-quality work.

Ideology, Partisanship, and Scholarship

Think tanks are often ideological and sometimes partisan, but critics who deploy these charges interchangeably miss a distinction that makes a difference.  “Once seen as non-ideological ‘universities without students,’ the American think tank has, in many cases, become a partisan stalking horse,” Pacific Standard’s Emily Badger complained.  On the relationship between partisanship and ideology, she averred:  “Those two terms…have become increasingly synonymous in modern politics.”  But while partisanship and ideology can each create conflicts of interest for think tank scholars, the two pressures are distinct, and they often conflict with each other.  Only by separating one from the other can we think carefully about how to insulate think tank research from influences that will diminish its quality.

When we refer to a person’s “ideology,” we are usually talking about a stable set of somewhat general (but not maximally general) moral and political principles. So understood, ideological principles serve as normative guideposts for, well, nearly everyone who ventures opinions about public policy. Indeed, even some fully-fledged political philosophers see value in these commitments. John Rawls called them “considered convictions,” and he believed that they play a legitimate role in philosophical deliberation. Elected representatives, opinion journalists, policy wonks at explicitly ideological institutions, grassroots advocates, and family members who discuss current events over the dinner table invoke their ideological commitments as a sort of magnetic north when they explore public policy ideas.

Studies suggest that adults have fairly stable ideologies, but they can evolve over time.  Some people experience a dramatic ideological conversion once or twice in their lives, either spurred by philosophical or religious study or as a result of coming to grips with facts about the world that cast doubt on the validity of their formerly-held principles.

Academics, traditional reporters, nonpartisan government analysts, and other political participants with highly technocratic orientations sometimes purport to eschew “ideology.” But as The Atlantic’s Conor Friedersdorf recently pointed out, technocrats need ideologies, too! Ideology is what analysts consult in order to decide which research questions are socially important and which are unworthy of exploration. As former Cato Institute research fellow Will Wilkinson explained:

There’s no avoiding the fact that, if you’re doing anything with policy at all, you’re trying to achieve some goal. If you think that the goal is one that’s worth having, you have to have some rational justification for why that’s the end that we ought to be aiming at.

Even those whose research agendas are set by others—lowly research assistants, OMB economists, and Iranian nuclear scientists come to mind—must at least assure themselves that their analytical skills are not furthering some evil purpose. To answer the question, “Is my job morally acceptable, or must I quit?” a researcher must consult her ideology. When academics undertake projects for the purpose of “advancing the science” of their discipline by refining its methods, they must ask and answer the same question.

“Partisanship” is a different thing. Political partisanship has to do with being a team player in a coalition that seeks electoral victory. To be “partisan,” therefore, is to speak and act for the purpose of advancing the electoral prospects of one’s party. Ideological convictions frequently determine the political party with which a person chooses to make common cause. This is always a choice of the lesser evil, though, because ideologies can be as unique as snowflakes, while political parties are coalitions. A religious conservative does not share an ideology with a libertarian, nor does a labor Democrat share an ideology with a deep ecologist.

Inevitably, the demands of partisanship will conflict with the demands of ideology. In 2003, for example, small government conservatives at the Heritage Foundation had to decide whether or not they would support President George W. Bush’s creation of Medicare Part D—a huge and unfunded new federal entitlement program. In that case, the think tank’s ideological commitments rather impressively prevailed over the pressures of partisanship. On the other hand, Heritage’s Obama-era about-face on the ideological acceptability of individual health insurance mandates raised questions about whether partisanship sometimes masquerades as ideological purity.

Both ideology and partisanship can create conflicts of interest when a public policy scholar hopes, for partisan or ideological reasons, that her analysis will have a particular outcome. Suppose that an analyst at Think Tank A is ideologically committed to a principle of noninterference with the internal conflicts of foreign nations. She may examine the effect of interventionist policies on the federal budget in the hope that her findings will persuade people concerned about the deficit to support a less interventionist foreign policy. If she discovers that foreign policy has little or no effect on the deficit, she will be disappointed.  Similarly, a partisan may hope to vindicate the education reforms of a particular president during re-election season. If he discovers that the reforms actually did not benefit students, he will be disappointed by his findings as well.

This kind of conflict does not bedevil wonks alone. Traditional academics also hope that their research will reflect favorably on their ideological or partisan commitments. Princeton political scientist Martin Gilens, whose personal views are liberal, recently published a book in which he reports that George W. Bush’s policies reflected the preferences of poor and working class voters far more accurately than did those of Presidents Clinton or Johnson.  Gilens jokingly described his surprise and chagrin:

Certainly for Bush 43—George W. Bush—I would not have expected high levels of responsiveness to anybody except maybe the most affluent.  Like a good political scientist, when I found these results, I said there must be some mistake in the coding.  I did everything I could, you know, to make them go away.  But they were very persistent.

Ideology is not wholly dispensable for public policy scholars, regardless of where they work. Ideological conflicts of interest therefore cannot always be sidestepped.  Because good research is a lot like good driving, such conflicts may degrade research quality despite a scholar’s honest best efforts.  They should therefore be acknowledged and carefully managed, by both scholars and institutions, in a way that protects research quality.  At explicitly ideological institutions, I believe this is among the biggest challenges in think tank ethics.

A scholar’s partisan allegiances, if any, have no necessary role to play in her research process. A passionate partisan may never entirely ignore the partisan implications of her research, but setting them aside seems like an unambiguously worthy goal. Indeed, nearly all U.S. think tanks make public claims of nonpartisanship. These commitments are encouraged by the tax code, but when and where they are taken seriously, they promote good research practices by minimizing an unnecessary source of bias. Nonpartisanship seems to me especially valuable to think tanks with ideological missions. It is challenging, but I believe possible, to maintain good research standards in an ideologically committed institution; partisanship undermines both the ideological mission and the research standards at once.

Why Conflicts of Interest Matter

Conflicts of interest matter in the think tank world, because good research is a lot like good driving. If you are like most people, you make an honest effort to avoid getting into car accidents. You look both ways at stop signs, check your blind spot before you change lanes, and drive more slowly in rain or snow. You probably don’t drive drunk, and perhaps you, like me, no longer answer the phone behind the wheel.

Still, you don’t drive as carefully as you might, and your auto insurance policy is partly to blame. “But an accident is a dangerous, inconvenient, and humiliating event regardless of insurance,” you may protest. That’s a fair point. Almost no one who buys insurance thinks: “Awesome. Now I can get into all the car wrecks I want!” Yet, this fact—that well-insured drivers are, all else equal, more likely than others to be involved in serious auto accidents—has been proven about as well as anything can be using large-scale data sets.

Economists refer to this as a “moral hazard” effect—a term that misleadingly implies that an adequate exercise of will power can always overcome it. But that isn’t entirely true when it comes to driving, because driving well isn’t a single decision that we make by deliberately reflecting on all of the reasons that we have. Instead, good driving is an accumulation of a million tiny choices, many of which are habitual or semi-conscious. We don’t fully notice how incentives affect this complex pattern of conduct because we can’t identify every distinct decision point at which they come into play. If we tried, we’d never get out of the driveway.

Incentives influence think tank research in exactly the same way.  Every step of the research process involves thousands of tiny decisions, from the formulation of Boolean searches, to the choice of some variable’s functional form, to the book you didn’t finish because someone asked you to join them for lunch. No matter how conscientiously think tank scholars strive to do excellent research, a conflict of interest makes it likely that, in myriad tiny ways, they are doing their jobs less well. As my former colleague, Tim Lee, once wrote, “Getting the right answer is hard, and you are just less likely to do it if you have a huge financial incentive not to.”

It doesn’t follow that no conflicted research should be done, any more than the moral hazard effects of auto insurance make it the case that we shouldn’t carry any auto insurance. I carry generous auto insurance because I think that avoiding financial ruin and being able to fully compensate others for any mistakes I make behind the wheel are more important, all things considered, than a slight, subconscious reduction in my driving quality. I just believe that think tanks ought to apply that same kind of cost-benefit scrutiny to their policies concerning conflicts of interest.

When I ask think tank executives how they insulate their scholars from incentives that could undermine good research practices, they mention their “bully pulpit” strategy, their think tank’s “value proposition,” or their scholars’ “inner discipline.” Most think tank executives work admirably hard to establish a culture of honesty and excellence. In my experience, and to their credit, think tank executives and scholars usually do “the right thing” when they are faced with a clear moral dilemma. Good cultural values can make light work of the decisions that we actually notice and think about.

But culture alone isn’t enough, because conflicts of interest can subtly bias research in ways that scholars aren’t fully aware of and therefore can’t fully overcome. How can think tanks nudge their scholars in the direction of excellence? It seems to me that think tanks should steer clear of funding arrangements that create conflicts of interest if alternative arrangements are possible, and that they should establish personnel policies that protect the careers of scholars who generate disappointing answers to empirical questions. That few think tanks have established clear policies on such subjects suggests to me that they rely too exclusively on individual integrity, when they should also pay attention to the power of incentives to promote, or undermine, good research practices.

Straw Poll Fallacy

Good think tanks do research, and they also do advocacy, but think tanks that fail to make any distinction between the two squander valuable reputational capital.

Last Friday, my former MI colleague, Josh Barro, scolded the Florida-based James Madison Institute for conducting a “push poll” about the state’s federally-subsidized Medicaid expansion plans.  “This isn’t a poll designed to figure out how Floridians feel about the Medicaid expansion,” Barro complained, “it’s one designed to get them to say they oppose it, so the organization commissioning the poll can say it’s unpopular.”

Cato Institute health policy guru Michael Cannon, also a former colleague of mine, had apparently reviewed the poll questions for the James Madison Institute before the poll hit the field.  Cannon fired back:

Medicaid expansion is not a benefits-only proposition. When a poll only asks voters about benefits, the results are meaningless. Yet to my knowledge, JMI’s poll is so far the only poll that has asked voters about both costs and benefits. All other polls—for example, the hospital-industry poll Barro cites—ask only about benefits, as if the costs don’t exist or shouldn’t influence voters’ evaluation of the expansion. Those polls are “push” polls, while JMI’s poll is the only honest poll in the field.

I consulted an experienced GOP-leaning political pollster in the Washington, DC area to get the skinny.  The pollster, responding on condition of anonymity, expressed “serious concerns about the poll.” To wit:

First, it’s not a true survey of registered voters, because they focus mostly on pulling from registration lists those who voted in at least two of the last four elections. You can’t say that’s representative of Florida registered voters, though you could say it’s representative of likely voters. That’s a distinction that should be made clear, as it will bake in a slight right-leaning skew compared to straight-up registered voters.

I stopped reading and started writing this email when I hit that first debt question. A good poll would have asked a more “clean read” without loading up a big message before the ask about how important the debt is. The interviewer says “well everyone else cares about the debt, so, how concerned are you?” Really not good. This is the kind of question you push further down in the questionnaire as a message test, not as a legitimate gauge of concern about debt.

Then I got to the question [posed as] “some say we need reform” vs. “some say we need to preserve a government program.” How [often do] Democrats actually say, “we must preserve a government program!” Never. They say, “we must preserve needed health services for our poorest citizens,” etc. A good poll puts our best message against their best message. Already, the poll is putting up a weak version of the opposition’s position.

The point thus goes to Barro, though I’m sympathetic to Cannon, who is not a pollster and was only asked to review these questions for the accuracy of their substantive claims about Medicaid.

Such bad methods reflect poorly on the James Madison Institute, which holds itself out to be a research and educational organization, complete with a Research Advisory Council primarily composed of university-based social scientists. Think tank research isn’t expected to be peer-reviewed academic journal fodder, but it usually aspires to inform the public policy debate by telling us something new about the world we live in.  Think tank findings are often presented in light of researchers’ prior ideological commitments, but they should not merely be talking points in support of predetermined conclusions.

Not surprisingly, a James Madison Institute press release reveals that a division of the Florida-based public relations* firm Cherry Communications conducted the Medicaid expansion poll under contract.  “[While] a polling firm’s first goal is to create situational awareness,” the DC-area pollster explained, “a PR firm’s first goal is to create good headlines. These are each valuable but are not the same thing.”  Nor are they necessarily mutually exclusive:

There are really two different ways to approach designing a poll. One is if you want an accurate read on public opinion to guide strategic decision making. The other is to “message test” and to figure out how best to move opinion and build a communications plan. You can do both in one survey as long as the “clean read” part comes first.

The fundamental problem here is that this poll was conducted with public release in mind and to show right off the bat that conservative messages on the issue work. This is a PR firm’s goal clearly. There’s no time taken to get the clean read.

The James Madison Institute hasn’t yet responded to my request for comment, but it isn’t hard to surmise what happened here: the communications department probably commissioned a poll as a way to get airtime for the Institute’s message on Medicaid expansion.  But a poll isn’t just a message.  A poll is a social scientific method, which is why a lousy poll from a think tank casts doubt on the quality of its other research.

Journalists and policymakers afford more weight to think tank research than they do to press releases from PR firms because think tanks aren’t supposed to just spin.  The James Madison Institute may have rationalized this survey as the digital equivalent of liquid courage for skittish pols, but it should worry instead about what techniques like these suggest about its institutional values.  Reputation matters, because media and government consumers often don’t have the time or expertise to independently assess the quality of every report.  I am less likely now than I would have been last week to take anything in the James Madison Institute’s new policy brief on Medicaid expansion at face value, because I have reason to question the organization’s commitment to good research methods.

UPDATE:  I have just been informed that the Cherry Communications website I linked above belongs to a different firm with the same name as the “Cherry Communications” referenced in the James Madison Institute press release, whose division, “Public Insight,” conducted the Medicaid expansion poll.  I have eliminated the incorrect link, and I apologize to both firms for the error.

The James Madison Institute has offered some comments regarding the poll, and Jim Cherry of the Cherry Communications whose division conducted the Medicaid expansion poll has offered to comment also, so stay tuned for a follow-up post.

*Cherry Communications is really better characterized as a Republican political consulting firm specializing in phone-based services such as voter identification calls, persuasion/advocacy calls, get-out-the-vote calls, surveys, and polls.  Its website is here.