1. The Search for Truth: The New York Times Magazine has a long piece about Amy Cuddy, the social psychologist of "power posing" fame, and the messy process by which her research has been popularized and then discredited. The piece suggests that Cuddy (though it by no means holds her out as blameless) has been uniquely and personally targeted as the face of unreplicable and bad social science in an era of changing research practices and expectations, perhaps because she is a woman. More broadly it ponders whether the process and social conventions of communication around challenging social science research may do more harm than good. It points specifically to Uri Simonsohn, Joseph Simmons and Andrew Gelman and their roles in both calling out bad social science and in specifically highlighting Cuddy's power posing paper as an example.
It's well worth the long read, careful consideration, and some critical evaluation. The piece comes at a very interesting time, with the Weinstein saga, #MeToo, and more specifically the pushback about Econ Job Market Rumors and bad behavior in economics. It's important to read the piece in the context of such things as EJMR and this anecdote from Rohini Pande (in an interview with David McKenzie this week) relating how a "senior male World Bank economist wrote to our senior male colleagues at MIT and Yale asking that they review our work and correct our mistakes" in one of her early papers (with Esther Duflo; see question 4 in the link, but read the whole thing, it's very good on a lot of topics).
But on reflection, I don't think the idea that Cuddy was uniquely targeted or treated more harshly than others holds water. It only appears so to a New York Times reporter because Cuddy's work is the kind that gets broad attention. Remember when Ben Goldacre kicked off "Worm Wars" with an amazingly condescending piece asking people not to point and laugh at Miguel and Kremer for the supposed "errors" in their Worms paper because they shared their data? Or the language and dudgeon around Reinhart and Rogoff's Excel error? Or the intemperate words flowing around the failure to replicate John Bargh's priming work? From another field, here's some pointed language challenging a recent result on gene editing, alleging some pretty basic errors.
Of course, the commonality of bad behavior in academic circles doesn't excuse it. But that cuts both ways. Cuddy has also been using this faulty logic in her own defense. As far as I can tell, her main defense has always been "everyone was engaging in bad research practices, so it's not my fault", and that's definitely the implication that the NYT article gives. I don't see much distance between that and people excusing sexual harassment because they were "raised in the '60s and '70s."
Could the practice of social science be better? There's no question, but it's also not clear exactly how, beyond the obvious avoidance of misogyny and ad hominem attacks. Even that line can be difficult to see, because the nature of social science research requires a great deal of personal investment. It's hard not to feel attacked when one's research, quite literally one's life's work, is criticized.
To me, the most thought-provoking part of the NYT piece is when Simmons, reviewing an email he sent to Cuddy about follow-up work on whether the power posing research was reliable, says "that email was too polite" given how serious he thought the problems were. And there is a lot of bad science that needs to be called out. This week, there's yet another update to the Brian Wansink saga: several papers flat-out misrepresent who the study participants were (e.g. a paper claiming participants were 8-11 when they were 4-5). Not calling bad science out, I think, is a real contributor to real-world problems, like Chief Justice John Roberts being able to call good political science research "sociological gobbledygook."
Here's a Chris Blattman thread with his reactions. Here's Andrew Gelman's response to the NYT piece, and for the sake of this topic it is one of the few posts anywhere on the internet where you should read the comments. Someone in one of the Twitter threads wondered about the responsibility of Gelman and other bloggers like Tyler Cowen to police their comments. I'm sympathetic to this idea, but I'm old enough to remember policing comments on my own blog. It's an incredibly time-consuming and soul-sucking affair with lots of trade-offs. The "business model" of blogging just doesn't allow it. In fact, in some ways it was the business model required to police commentary, also known as paid journalism, that led to blogging: the gatekeepers of commentary shut out too many voices who should be heard. Science, and the pursuit of truth, is hard.
2. Our Algorithmic Overlords: This isn't as much of a pivot as it might seem. Here's a fairly intemperate piece critiquing the "digital humanities." There's a good bit of whining but it's worth reading because much of the critique applies to the big data and machine learning movement in economics. And the critique is more palatable because it's not directly about those fields, and so no one, in those fields at least, will feel personally attacked. The bottom line is the same as above: even with shiny new tools, big data and algorithms the pursuit of truth is hard.
Now here's a pivot. The New Yorker has a long story (this is apparently the long-reads week) about the evolving nature of factory jobs and "robot overlords." I couldn't help thinking about the distinction made in the "premium mediocre" piece from a few weeks ago: employment is bifurcating into jobs where you tell the robot what to do and jobs where the robot tells you what to do. Still, the most compelling piece about the changing nature of jobs and employment this week isn't about robot overlords; it's this story of a worker in a ball bearing plant in Indiana losing her job. Highly, highly recommended.
Back to our algorithmic overlords. Here's some more in-depth reporting on the creation of a complete surveillance state, including AI, in China's Xinjiang province. They're not just monitoring phone and digital money use, as I noted a few weeks ago. There are now facial recognition cameras at gas pumps.
And finally, here's a chance to change your priors. Remember those papers that said that note-taking on laptops leads to less learning and poor student performance? Here's a paper that rigorously randomizes note-taking technology and finds that there isn't a difference between taking notes by hand and on a laptop, suggesting the earlier findings were primarily selection effects. And we're back to the theme of science being hard.
3. Household Finance: OK, let's take a break from the long reads, but stay with the "quality of research matters" sub-theme. I continue to think financial literacy is the bellwether for whether "evidence-based" policy is making an impact. And apparently it's not. Betsy DeVos, the US Secretary of Education, officially announced this week that financial literacy taught in schools is number 4 of 11 priorities for the department. If only we didn't have good evidence that teaching math has an impact on financial outcomes, but financial literacy doesn't. Or how about some new work from Xavi Gine and colleagues showing that presenting key facts about financial products helps consumers make better choices than financial literacy training does?
Here's some worthwhile reading on how households are dealing with their finances that should sour anyone on the current financial literacy curricula. Check out these two reviews (one and two) of financial services providers in Google Maps. Remember how distributing food stamps twice a month cuts down on shoplifting in grocery stores? Less spiky delivery of benefits also changes, in a positive way, how people spend the money they receive. It's unlikely that financial literacy in the classroom would have changed this person's perspective on money. Check out some insights from EARN on their users' financial challenges and saving behavior.
Of course the most important read on household finance is The Financial Diaries. Here's a new review from Beth Rhyne of that book and Lisa Servon's Unbanking of America.
4. Digital Finance: Now let's tie these last two themes together. China is not only building a panopticon in Xinjiang, it's also ramping up its efforts to track deadbeat borrowers with a national database and public shaming. I'm sure that's going to go well.
In other credit access news, I've long been a champion of Entrepreneurial Finance Lab, which uses psychometrics to assess credit-worthiness of small business owners, allowing more of them to get access to credit. At the same time, I've long been very wary of Lenddo, which uses alternative data, like social media connections, to assess credit-worthiness of individual borrowers. I've called it, I believe, "a tax on poor people's family ties." I've been able to avoid the cognitive dissonance of these two perspectives until this week. Lenddo and EFL announced that they are merging. Now I really don't know how to feel.
Finally, here's a story about the Gates Foundation funding fintech infrastructure software for interoperability of mobile money platforms. The one thing that's clear here is that the reporter doesn't understand the topic.
5. Global Development: To close us out, and hopefully make you feel slightly better about the state of research, here's a model of reasoned argument and debate on an important topic: will Africa experience a manufacturing boom as wages creep up in formerly low-wage countries like Vietnam and Bangladesh? CGD has a paper that says no, because labor costs in many African countries are actually relatively high. That led to a lively debate, which CGD has helpfully collated here. And here's a helpful overview of living standards in African cities and rural areas that has some relevance.