Editor's Note: What would you rather do in the midst of a heat wave than read about social science methodological debates?
1. Meta-Analysis of Worms: When the dust settled in last year's #wormwars it was clear that a core issue was methodological and interpretive differences between epidemiologists and economists (see Humphreys' section 5). A new meta-analysis of deworming impact studies from Croke, Hicks, Hsu, Kremer and Miguel takes that issue head-on: it's as much an argument about how to evaluate evidence as it is an argument about the evidence on deworming in particular, concluding, "Under-powered meta-analyses are common in health research..."
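The power point is mechanical enough to sketch. A fixed-effect meta-analysis pools studies with inverse-variance weights, so its power to detect a true effect is driven by the pooled standard error--and with a handful of small studies and a modest true effect, that power can be dismal. A minimal sketch in Python (the effect size and standard errors are invented for illustration, not taken from the deworming literature):

```python
import numpy as np
from scipy.stats import norm

def meta_power(ses, true_effect, alpha=0.05):
    """Power of a fixed-effect (inverse-variance) meta-analysis to
    detect `true_effect` with a two-sided test at level `alpha`."""
    weights = 1.0 / np.asarray(ses) ** 2      # inverse-variance weights
    pooled_se = np.sqrt(1.0 / weights.sum())  # SE of the pooled estimate
    z_crit = norm.ppf(1 - alpha / 2)
    z_alt = true_effect / pooled_se           # mean of Z under the alternative
    return norm.cdf(z_alt - z_crit) + norm.cdf(-z_alt - z_crit)

# Hypothetical: five small trials, each with standard error 0.15,
# trying to detect a modest true effect of 0.1 standard deviations.
print(meta_power([0.15] * 5, true_effect=0.1))  # ~0.32 -- badly under-powered
```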
2. Police Shootings: Another raging methodological debate on an issue of even greater emotional resonance broke out this week: are African-Americans more likely to be shot by police than whites? Roland Fryer has a new working paper that answers, "No [in some cities, though they are more likely to be physically accosted during a stop]." The initial critical reactions focused primarily on the fact that this is a working paper and that reporting on it gave too little emphasis to the limited context of the results (e.g. data from only a handful of cities). The larger methodological issue, though, is how to treat the data in the first place. Michelle Phelps looks at how selection in who gets stopped by police can substantially bias estimated outcomes, and puts the findings in the context of other research. Radley Balko looks at how the source of the data--police reports--makes it questionable whether the data can be trusted at all.
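To see why conditioning on stops matters so much, here is a toy simulation of the selection problem the critics describe (all parameters invented; this illustrates the statistical point, not Phelps' or Balko's actual analysis). Both groups face an identical shooting rule, but one group is stopped at a lower threshold of suspicion, and the conditional-on-stop shooting rates diverge anyway:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# An unobserved "suspicion" score, identically distributed in both
# groups, drives both the stop decision and the shooting decision.
suspicion = rng.normal(size=(2, N))  # row 0: group A, row 1: group B

# Group B is stopped at a lower threshold (more low-suspicion stops).
stop_threshold = np.array([[1.5], [0.5]])
stopped = suspicion > stop_threshold

# Shootings follow the exact same rule for both groups.
shot = (suspicion > 2.5) & stopped

for g, name in enumerate(["A (high stop threshold)", "B (low stop threshold)"]):
    print(f"Group {name}: P(shot | stopped) = {shot[g].sum() / stopped[g].sum():.3f}")
# Group B shows a *lower* shooting rate conditional on a stop even though
# the shooting rule is identical -- selection into the data, not behavior,
# drives the gap.
```

If the stop decision is itself biased, the rate of shootings among those stopped can move in either direction regardless of what officers do once a stop occurs--which is exactly why the choice of denominator is the crux of the debate.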
3. Charter Schools: Completing the trifecta of emotionally resonant issues, how about some controversy over how to evaluate schools? The New York Times had a front-page story on the "chaos" of Detroit's charter school expansion and the harm to students, with this curious sentence: "But half the charters perform only as well, or worse than, Detroit's traditional public schools." Jay P. Greene argues that the piece misuses the little data on charter performance that it has. Here's an old post from Alexander Berger on understanding charter performance evaluations and what they actually measure. Meanwhile, here's David Evans rounding up some recent global research on education, teachers and how to measure them.
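One way to see why critics find that sentence uninformative: "half perform as well or worse" is close to guaranteed whenever the charter advantage is modest, even if charters are unambiguously better on average. A back-of-the-envelope calculation with invented numbers (not the actual Detroit data):

```python
from scipy.stats import norm

# Hypothetical: charter school performance averages 0.2 SD higher than
# traditional public schools; both are normally distributed with SD 1.
charter_advantage = 0.2

# Share of charters scoring at or below the traditional-school median:
share = norm.cdf(0.0 - charter_advantage)
print(f"{share:.0%} of charters perform 'as well as, or worse than' average")  # ~42%
```

Roughly 42 percent of charters would land at or below the traditional-school median here, despite a real 0.2 SD advantage--so the statistic quoted in the story tells readers almost nothing about average charter impact.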
4. Study Design: Speaking of what studies are measuring, Bruce Wydick has a new post, with specific emphasis on microcredit impact evaluations, on how rarely development impact evaluations start with a diagnosis of a problem before prescribing a treatment--and how to design better studies grounded in that diagnosis.
5. Prediction Markets: Finally, everyone's (n=1) favorite prolific blogger on statistics and causal inference, Andrew Gelman, has a couple of posts about why prediction markets and polls are diverging, with prediction markets seemingly on the losing end of accuracy.