Why Conflicts of Interest Matter

Conflicts of interest matter in the think tank world because good research is a lot like good driving. If you are like most people, you make an honest effort to avoid getting into car accidents. You look both ways at stop signs, check your blind spot before you change lanes, and drive more slowly in rain or snow. You probably don’t drive drunk, and perhaps you, like me, no longer answer the phone behind the wheel.

Still, you don’t drive as carefully as you might, and your auto insurance policy is partly to blame. “But an accident is a dangerous, inconvenient, and humiliating event regardless of insurance,” you may protest. That’s a fair point. Almost no one who buys insurance thinks: “Awesome. Now I can get into all the car wrecks I want!” Yet this fact—that well-insured drivers are, all else equal, more likely than others to be involved in serious auto accidents—has been proven about as well as anything can be with large-scale data sets.

Economists refer to this as a “moral hazard” effect—a term that misleadingly implies that an adequate exercise of willpower can always overcome it. But that isn’t entirely true when it comes to driving, because driving well isn’t a single decision that we make by deliberately reflecting on all the reasons we have. Instead, good driving is an accumulation of a million tiny choices, many of which are habitual or semi-conscious. We don’t fully notice how incentives affect this complex pattern of conduct because we can’t identify every distinct decision point at which they come into play. If we tried, we’d never get out of the driveway.

Incentives influence think tank research in exactly the same way. Every step of the research process involves thousands of tiny decisions, from the formulation of Boolean searches, to the choice of some variable’s functional form, to the book you didn’t finish because someone asked you to join them for lunch. No matter how conscientiously think tank scholars strive to do excellent research, a conflict of interest makes it likely that, in myriad tiny ways, they are doing their jobs less well. As my former colleague Tim Lee once wrote, “Getting the right answer is hard, and you are just less likely to do it if you have a huge financial incentive not to.”

It doesn’t follow that no conflicted research should be done, any more than the moral hazard effects of auto insurance mean that we shouldn’t carry any insurance at all. I carry generous auto insurance because I think that avoiding financial ruin and being able to fully compensate others for any mistakes I make behind the wheel are more important, all things considered, than a slight, subconscious reduction in my driving quality. I just believe that think tanks ought to apply that same kind of cost-benefit scrutiny to their policies concerning conflicts of interest.

When I ask think tank executives how they insulate their scholars from incentives that could undermine good research practices, they mention their “bully pulpit” strategy, their think tank’s “value proposition,” or their scholars’ “inner discipline.” Most think tank executives work admirably hard to establish a culture of honesty and excellence. In my experience, and to their credit, think tank executives and scholars usually do “the right thing” when they are faced with a clear moral dilemma. Good cultural values can make light work of the decisions that we actually notice and think about.

But culture alone isn’t enough, because conflicts of interest can subtly bias research in ways that scholars aren’t fully aware of and therefore can’t fully overcome. How can think tanks nudge their scholars in the direction of excellence? It seems to me that think tanks should steer clear of funding arrangements that create conflicts of interest if alternative arrangements are possible, and that they should establish personnel policies that protect the careers of scholars who generate disappointing answers to empirical questions. That few think tanks have established clear policies on such subjects suggests to me that they rely too exclusively on individual integrity, when they should also pay attention to the power of incentives to promote, or undermine, good research practices.

One Response to Why Conflicts of Interest Matter

  1. J Self March 14, 2013 at 6:00 pm

    The problem with the car accident example above is the tiny phrase “all else equal,” because all else is not equal. The data needs more careful parsing. In addition to the number of accidents, one has to consider what the insurance industry calls severity, i.e., whether an accident is a fender bender or a tragedy. Careful analysis validates the common-sense idea that the worst drivers are those with nothing to lose. Here you can insert young males with inexpensive cars, few other assets, and little or no insurance. One reason a driver involved in a tragic accident is not involved in another is that that person is now dead.