There’s been more activity on the question of the importance of RCTs. Last week, Bill Easterly wrote his thoughts about randomized controlled trials (RCTs) on his blog, and Chris Blattman posted a response. Both of them seem to take the perspective that academic research should produce new theory and/or create major policy recommendations. The fact that each RCT often only answers a “small” question – and sometimes only in a certain context – seems to them to be a huge drawback. I disagree.
I work as a PA in La Paz, and I’m proud and excited about all of our projects, which mostly deal with “small” questions in the Bolivian context. Popularizing RCTs in the development context seems to me to be the best way to improve the goods and services being offered to the world’s poor. RCTs may not teach us the answers to many big questions, but they are still an important tool that academics should embrace and promote.
Companies all over the world run randomized experiments in order to learn what sorts of services they should offer. Google bases practically all of its decisions on randomized experiments, and credit card companies such as Capital One also use the methodology. While NGOs (or governments) are more interested in poverty alleviation than profit margins, the research methods that best serve private companies can and should be adapted to the needs of the development community. Each specific project might not have an impact on the lives of millions of people (though some do), but the overall effect of emphasizing RCTs can be huge if projects and organizations all over the globe start to really think about the best way to accomplish their mission. Billions of dollars are spent on development; we have a responsibility to try to optimize the way that money is spent. There are absolutely other ways of pursuing this goal – natural and lab experiments, theory, and so on – but RCTs are a valuable tool that is being underused because they are hard to execute and seldom produce newsworthy results.
It’s important for academics to be involved with RCTs for two reasons. First, fancy letterhead lends credibility to an evaluation technique that many organizations in the developing world may not understand. Most of the organizations I’ve dealt with are in favor of introducing and evaluating innovative projects, but aren’t sure they want or need to do so in a rigorous manner. Offering a particular service in one city but not in another, vaguely similar one, or offering it only to the best clients, are not particularly effective ways of evaluating, and yet both have been suggested to me as replacements for RCTs. Such approaches are easier to implement and probably less expensive, but they aren’t nearly as useful. Organizations like IPA can get funding because our PIs are smart and qualified, and we’ve demonstrated that we develop creative projects and apply them in intelligent and careful ways. Being affiliated with an academic organization like IPA makes it much easier for people in the field (like me!) to demonstrate that we can help figure out new and better projects, and to show why it’s important to evaluate innovations with the best methods available.
It’s also important for academics to work with RCTs because it’s often unclear what type of intervention should be tried. For instance, much of IPA’s work is about figuring out how to improve health in the developing world. We don’t necessarily need a generalized theory of what works; we need specific, carefully thought-out projects that can improve what’s being offered in a particular context. Psychology and behavioral economics have produced theories of human behavior, and much of the work of modern development is figuring out the best way to apply those theories. Academics are probably the most qualified to think of ways to do that, and RCTs are probably the best way to test what they come up with. Just because RCTs don’t often lead to advances in general academic theory doesn’t mean that they don’t lead to important advances in development.
Here in Bolivia, through a partnership with a local MFI with a social mission, we’re focusing on the impact of changing the size of communal banks. We know that forcing people to form groups of a billion people is a terrible idea; no one would ever manage to get a loan. The right answer – the right range of guarantee sizes – is obviously less than that, and finding it is an empirical task. The bank originally allowed only associations of 15-20 people who all guaranteed each other. We wanted to see the impact of shrinking the guarantee group to only 5-7 people, because our PIs realized that potentially good microfinance clients might be scared off by having to guarantee so many others (often including several strangers). The bank agreed that the innovation (a semi-replication of a study in the Philippines) was promising and that proper evaluation was important. The results of our study so far have been promising, and we’re continuing to work with the MFI to try to identify an optimal size for their communal banks. They hadn’t been considering that tactic, and are happy that IPA helped them introduce a more flexible product which many clients clearly prefer. They’re considering using RCTs for their own innovations, which is a hugely important step toward ensuring that they offer the best services possible.
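The mechanics of the randomization in a study like this are actually simple, which is part of why I think more organizations should try it. Here is a minimal sketch in Python; the branch names, arm labels, and unit of randomization are all hypothetical illustrations, not the actual study’s design:

```python
import random

# Hypothetical units of randomization -- illustrative branch names only.
branches = [f"branch_{i:02d}" for i in range(20)]

# A fixed seed makes the assignment reproducible and auditable.
rng = random.Random(42)
shuffled = branches[:]
rng.shuffle(shuffled)

# Split the shuffled list in half: one arm offers small guarantee
# groups (5-7 members), the other keeps the standard size (15-20).
half = len(shuffled) // 2
assignment = {b: "small_groups_5_to_7" for b in shuffled[:half]}
assignment.update({b: "standard_groups_15_to_20" for b in shuffled[half:]})

# Count units per arm to confirm the split is balanced.
counts = {}
for arm in assignment.values():
    counts[arm] = counts.get(arm, 0) + 1
print(counts)
```

Because assignment is random, any later difference in outcomes (take-up, repayment, retention) between the two arms can be attributed to the group-size change rather than to which branches happened to get it.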
Will the results of the study ever get published in a peer-reviewed journal or lead to advances in knowledge about fundamental human behavior? Almost certainly not. Will it change the dominant lending paradigm worldwide? Probably not. Will it change the Bolivian microfinance context? Maybe. Our partner organization wants to offer the best services possible to its clients, and being familiar with RCTs is important to that goal. Furthermore, they (and other MFIs) have access to our results, and can judge whether they have been offering the right association size. Otherwise they basically have to guess what the best product would be, and I think their work is too important for that.