About a year ago, two papers made waves in the microfinance community. They were the first randomized controlled trials (RCTs) of expanding access to credit, and neither found evidence for the kind of impacts most people had come to expect, fairly or not, from microfinance. The results were somewhat surprising, but the power of these studies, and the reason they got so much airtime, lay in their methodological approach. As RCTs, they could establish (or fail to establish) causal connections between access to credit and outcomes like household income, connections that other, less rigorous types of studies can only suggest.
RCTs are increasingly used to study development programs. They are the method of choice for researchers at 3ie, the World Bank Evaluation Facility, and JPAL, as well as for us and our IPA colleagues. RCTs gained further recognition last year, when JPAL’s Esther Duflo was awarded a prestigious MacArthur Fellowship for her work.
So what makes RCTs different? And if they’re so powerful, why don’t we see more of them? A new FAI Framing Note by Jonathan Bauchet and Jonathan Morduch helps answer both questions.
Designing and implementing RCTs is difficult; the technical expertise they demand is one reason they aren’t more common. They also require circumstances that allow for randomization, such as a program pilot or expansion, and the conclusions that can be drawn from them may be limited. RCTs measure average impacts within a specifically defined population, so we can’t assume that the results apply to other people in other parts of the world.
Still, when it comes to having confidence in the results, there’s no substitute for RCTs. Bauchet and Morduch’s Framing Note tells the full story of their advantages and drawbacks. It also describes four randomized experiments in microfinance: the two mentioned above on expanding access to credit, a similar one in a different setting, and a fourth on returns to capital in microenterprises.