1. Global Development: One of the more encouraging trends in development economics, as far as I'm concerned, is the growth of long-term studies that report results not just once but on an ongoing basis. Obviously long-term tracking like the Young Lives Project or smaller-scale work like Robert Townsend's tracking of a Thai village (which continues to yield valuable insights) falls in this category, but it's now also happening with long-term follow-up from experimental studies. Sometimes that takes the form of tracking down people affected by earlier studies, as Owen Ozier did with deworming in Kenya. But more often, it seems, studies are maintaining contact over longer time frames. A few weeks ago I mentioned a new paper following up on Bloom et al.'s experiment with Indian textile firms. The first paper found significant effects of management consulting in improving operations and boosting profits. The new paper sees many, but not all, of those gains persist eight years later. Another important example is the ongoing follow-up of the original GiveDirectly experiment on unconditional cash transfers. Haushofer and Shapiro have new results from a three-year follow-up, finding that, as above, many gains persist but not all, and the comparisons unsurprisingly get a bit messier.
Although it's not quite the same, I do feel like I should include some new work following up on the Targeting the Ultra Poor studies--in this case not on long-term effects but on varying the packages and comparing different approaches as directly as possible. Here's Sedlmayr, Shah and Sulaiman on a variety of cash-plus interventions in Uganda: the full package of transfers and training, the transfers alone, transfers with only light-touch training, and a pure attempt to boost savings. They find that cash isn't always king: the full package outperforms the alternatives.
2. Our Algorithmic Overlords: If you missed it, yesterday's special edition faiV was a review of Virginia Eubanks' Automating Inequality. But there's always a slew of interesting reads on these issues, contra recent editorials claiming that no one is paying attention. Here's NYU's AI Now Institute on Algorithmic Impact Assessments as a tool for providing more accountability around the use of algorithms in public agencies. While I tend to focus this section on unintended negative consequences of AI, there is another important consideration: intended negative consequences of AI. I'm not talking about SkyNet but about the use of AI to conduct cyberattacks, create fraudulent voice/video, or commit other criminal acts. Here's a report from a group of AI think tanks, including EFF and OpenAI, on the malicious use of artificial intelligence.
3. Interesting Tales from Economic History: I may make this a regular item as I tend to find these things quite interesting, and based on the link clicks a number of you do too. Here's some history to revise your beliefs about the Dutch tulip craze, a story that, it turns out, was too good to fact-check, at least until Anne Goldgar of King's College did so. And here's detailed work from Judy Stephenson of Oxford on working hours and pay for London construction workers during the 1700s. Why is this interesting? Because it's important to understand the interaction of productivity gains, the industrial revolution, wages and welfare--something we don't know enough about but that has implications as we think about the future of work, how it pays, and the economic consequences for workers at different skill levels. And in a different vein, but interesting nonetheless, here is an epic thread from Pseudoerasmus on Steven Pinker's new book, nominally about the Enlightenment.
Book Review Special Edition: Automating Inequality
1. Algorithmic Overlords (+ Banking + Digital Finance + Global Development) book review: I'd like to call myself prescient for bringing Amar Bhide into last week's faiV, which was headlined by questions about the value of banks. Little did I know that he would have a piece in National Affairs on exactly that topic, Why We Need Traditional Banking. One reason to read the (long) piece is his perspective on the important role that efforts to reduce discrimination through standardization and anonymity played in the move to securitization. Bhide names securitization as the culprit for a number of deleterious effects on the banking system and the economy overall (with specific negative consequences for small business lending).
The other reason to read the piece is that it is a surprisingly good complement to Automating Inequality, the new book from Virginia Eubanks. To cut to the chase: it's an important book that you should read if you care at all about the delivery of social services, domestically or internationally. But I think the book plays up the technology angle well beyond its relevance, to the detriment of very important points.
The subtitle of the book is "how high-tech tools profile, police and punish the poor," but the root of almost all of the examples Eubanks gives is a) simply a continuation of policies in place for the delivery of social services dating back to, well, the advent of civilization(?), and b) driven by the behaviors of the humans in the systems, not the machines. In a chapter about Indiana's attempt to automate much of its human services system, there is a particularly striking moment where a woman who has been denied services because of a technical problem with an automated document system receives a phone call from a staffer who tries very hard to convince her to drop her appeal. She doesn't, and she wins her appeal in part because technology allowed her to have irrefutable proof that she had provided the documents she needed to. It's apparent throughout the story that the real problem isn't the (broken) automation, but the attitudes and political goals of human beings.
The reason I know point a) above, though, is that Eubanks does such an excellent job of placing the current state in historical context. The crucial issue is how our service delivery systems "profile, police and punish" the poor. It's not clear at all how much the "high-tech tools" are really making things worse. This is where Bhide's discussion is useful: a major driver toward such "automated" behaviors as using credit scores in lending was to do an end-run around the discrimination that was rampant among loan officers (and continues to this day, and not just in the US). While Eubanks does raise the question of the source of discrimination, in a chapter about Allegheny County, PA, she doesn't make a compelling case that algorithms will be worse than humans. In the discussion on this point she even subtly undermines her argument by judging the algorithm against false report rates extrapolated from a study conducted in Toronto. This is the beauty and the disaster of human brains: we extrapolate all the time, and we are by nature very poor judges of whether those extrapolations are valid. In Allegheny County, by Eubanks' telling, concern that caseworkers were biased in the removal of African-American kids from their homes was part of the motivation for adopting automation. It turns out they are not. But there is discrimination; its source is again human beings, in this case the ones reporting incidents to social services. The high tech is again largely irrelevant.
I am particularly sensitive to these issues because I wrote a book in part about the Toyota "sudden acceleration" scare a few years ago. The basics are that the events described by people who claim "sudden acceleration" are mechanically impossible. But because there was a computer chip involved, many, many people were simply unwilling to consider that the problem was the human being, not the computer. There's more than a whiff of this unjustified preference for human decision-making over computers in both Bhide's piece and Eubanks' book. For instance, one of the reasons Eubanks gives for concern about automation algorithms is that they are "hard to understand." But algorithms are nothing new in the delivery of social services. Eubanks uses a paper-based algorithm in Allegheny County to try to judge risk herself--it's a very complicated and imprecise algorithm that relies on a completely unknowable human process, one that necessarily varies between caseworkers and even day-to-day or hour-to-hour, to weight various factors. Every year I have to deal with social services agencies in Pennsylvania to qualify for benefits for my visually impaired son. I suspect that everyone who has done so, here or anywhere else, will attest that there is clearly some arcane process happening in the background. When that process is not documented, for instance in software code, it will necessarily be harder to understand.
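To make the contrast concrete, here's a minimal sketch of what a documented scoring process looks like. The factors and weights here are entirely hypothetical illustrations of mine, not the actual Allegheny County model:

```python
# Hypothetical factors and weights, for illustration only --
# NOT the actual Allegheny County risk model.
RISK_WEIGHTS = {
    "prior_referrals": 2.0,
    "prior_substantiated_report": 3.0,
    "household_size": 0.5,
}

def risk_score(case):
    """Weighted sum of case factors: every weight is visible above,
    and the same inputs always produce the same, auditable score."""
    return sum(weight * case.get(factor, 0)
               for factor, weight in RISK_WEIGHTS.items())

print(risk_score({"prior_referrals": 2, "household_size": 4}))  # 6.0
```

However crude, a score like this can be audited line by line; a caseworker's internal version of the same weighting cannot.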
To draw in other examples from recent faiV coverage, consider two papers I've linked about microfinance loan officer behavior. Here, Marup Hossain finds loan officers incorporating information into their lending decisions that they are not supposed to. Here, Roy Mersland and colleagues find loan officers adjusting their internal algorithm over time. In both cases, the loan officers are, according to some criteria, making better decisions. But they are also excluding the poorest, even profiling, policing and punishing them, in ways that are very difficult to see. While I have expressed concern recently about LenddoEFL's "automated" approach to determining creditworthiness, at least if you crack open their data and code you can see how they are making decisions.
None of which is to say that I don't have deep concerns about automation and our algorithmic overlords. And those concerns are in many ways reinforced and amplified by Eubanks' book. While she is focused on the potential costs of automation to the poor, I see two areas that are not getting enough scrutiny.
First, last week I had the chance to see one of Lant Pritchett's famous rants about the RCT movement. During the talk he characterized RCTs as "weapons against the weak." The weak here aren't the ultimate recipients of services but the service delivery agencies that are not politically powerful enough to avoid the scrutiny of an impact evaluation. There's a lot I don't agree with Lant on, but one area where I do heartily agree is his emphasis on building the capability of service delivery. The use of algorithms, whether paper-based or automated, can also be a weapon against the weak. Here, I look to a book by Barry Schwartz, a psychologist at Swarthmore perhaps best known for The Paradox of Choice. But he has another excellent book, Practical Wisdom, about the erosion of opportunities for human beings to exercise judgment and develop wisdom. His book makes it clear that it is not only the poor who are increasingly policed and punished. Mandatory sentencing guidelines and mandated reporter statutes are efforts to police and punish judges and social service personnel. The big question we have to keep in view is whether automation is making outcomes better or worse. The reasoning behind much of the removal of judgment that Schwartz notes is benign: people make bad judgments; people wrongfully discriminate. When that happens there is real harm, and it is not obviously bad to try to put systems in place to reduce unwitting errors and active malice. It is possible to use automation to build capability (see the history of civilization), but it is far from automatic. As I read through Eubanks' book, it was clear that the automated systems were being deployed in ways that seemed likely to diminish, not build, the capability of social service agencies. Rather than pushing back against automation, the focus has to stay on how to use automation to improve outcomes and build capability.
Second, Eubanks makes the excellent point that while poor families and wealthier families often need to access similar services, say addiction treatment, the poor access them through public systems that gather, and increasingly use, data about them in myriad ways. One's addiction treatment records can become part of criminal justice, social service eligibility, and child custody proceedings. Middle-class families who access services through private providers don't have to hand over their data to the government. This is all true. But it neglects that people of all income levels are handing over huge amounts of data to private providers who increasingly stitch all of that data together, with far less scrutiny than public agencies are potentially subject to. Is that really better? Would the poor be better off if their data were in the hands of private companies? It's an open question whether the average poor person or the average wealthy person in America has surrendered more personal data--I lean toward the latter, simply because the wealthier you are the more likely you are to be using digital tools and services that gather (and aggregate and sell) a data trail. The key determinant of what happens next isn't, in my mind, whether the data is held by government or by a private company, but who has the power to fight nefarious uses of that data. Yes, the poor are often going to have worse outcomes in these situations, but it's not because of the digital poorhouse; it's because of the lack of power to fight back. Still, they are not powerless--Eubanks' stories tend to include examples of political power reining in the systems. As private digital surveillance expands, though, the percentage of the population who can't fight back is going to grow.
So back to the bottom line. You should read Automating Inequality. You will almost certainly learn a lot about the history of US poverty policy and what is currently happening in service delivery. You will also see lots to be concerned about in the integration of technology and social services. But hopefully you'll also see that the problem is the people.
Week of October 9, 2017
1. Evidence-Based Policy: Yesterday I was at a workshop hosted at Yale SOM and funded by the Hewlett Foundation on how to better connect evidence to policy. The workshop was part of a bigger project, and a series of reports are coming that I will share when they are available. There was a lot of good discussion, but I thought I would share two thoughts that I find get too little weight in evidence-based policy discussions. First, there is often discussion of a mismatch in the time horizons of researchers, implementers and policymakers. While this is no doubt true, the mismatch between those groups is trivial compared to the mismatch all of those groups have with the amount of time it takes for change that people can feel to occur. Deworming's important effects--on earnings, not school attendance--are only felt decades after treatment. Moving to Opportunity similarly has a decade-scale effect. Few, if any, of the researchers, implementers or policymakers will still be around when the world really is undeniably different because of them.
Which brings me to the second point. The enterprise of evidence-based policy is grounded in marginal improvements across large groups of people--and that's a good thing! I'm a big believer in the value of marginal improvements (QED). But people have a really, really hard time noticing or caring about marginal improvements. Human beings like stories about big changes for a few people with unclear causality a lot more than stories about marginal gains with sound causal inference. I'm more and more convinced (because of evidence!) that hope is a key ingredient for even marginal impact, but hope comes from Queen of Katwe, not from a 1/10th of a standard deviation improvement in average test scores. So the unanswered question for me in this conversation is, "How do we manage the tension between the policies that are good for people and the policies that people want?"
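For a sense of just how hard a marginal gain is to feel, here's a back-of-the-envelope sketch--my arithmetic, not a figure from any of the studies discussed here--of what a 0.1 standard deviation improvement means for a typical student, assuming roughly normal test scores:

```python
from scipy.stats import norm

# Illustrative arithmetic only: where does the median treated student
# land in the untreated score distribution after a 0.1 SD gain?
effect_size = 0.1  # improvement in standard deviation units
percentile = norm.cdf(effect_size)  # ~0.54
print(f"Median student moves from the 50th to roughly the "
      f"{percentile * 100:.0f}th percentile")
```

A four-percentile move is a real gain when it happens across millions of students, but no family tells a story about it over dinner.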
In other evidence-based policy news, here's a rumination on the difficulty of applying research to practice in democratization (specifically Myanmar). And here's Andrew Gelman on not waiting for peer review, particularly in Economics, to start putting evidence into practice.
2. Evidence-Based Operations: OK, so there's one more thought: the gap between policy and research on one side, and operations on the other. But rather than a long discussion of that topic, here's a very good new piece on the operational choices of front-line social workers and the gap between policy (whether evidence-based or not) and practice. The challenge in the spotlight is not the Marxist-style view of workers dissociated from their work by rules, but of workers dissociated because they face too many morally fraught choices. More light-heartedly, here's a piece that illustrates how hard it is to go from evidence to operational choices, as reflected through the failure of the US men's soccer team (I told you it would return). There is growing attention in impact evaluations to front-line staff and the "product" as actually experienced by the beneficiary, but much more is needed as far as I'm concerned.
3. Our Algorithmic Overlords: Speaking of operations, one of the areas where more attention is needed is the way that operations are being instantiated into algorithms that are opaque or entirely invisible. Ruben Mancha and Haslina Ali argue that the unexamined algorithm is not worth using. Of course, they are arguing from ethics, not from business profits, where it's abundantly clear that unexamined algorithms are worth using.
Here's a piece about technology-related predictions from Gartner, a tech industry research and advisory company. Skip the first three to see some striking predictions about AI-generated false information, such as that people in "mature economies will consume more false information than true information." There's a threat to advancing evidence-based policy that definitely wasn't on the agenda yesterday. I started my career at Gartner way back in 1995, and I remember one of the first things we were given to read was an article in Scientific American about the coming age of fake photography and video. Apparently that future has finally arrived.
Week of June 12, 2017
1. St. Monday, American Inequality and Class Struggle: One of my favorite things about writing the faiV is when I get the chance to point readers to something they would likely never come across otherwise. So how about a blog post from a woodworking tool vendor about 19th century labor practices, craft unions and the gig economy? Once you read that, you'll want to remind yourself about this piece from Sendhil Mullainathan about employment as a commitment device (paper here), and this paper from Dupas, Robinson and Saavedra on Kenyan bike taxi drivers' version of St. Monday.
Back to modern America, here's Matt Bruenig on class struggle and wealth inequality through the lens of American Airlines, Thomas Piketty and Suresh Naidu. I feel a particular affinity for this item this week, having watched American Airlines employees try for a solid 12 hours to do their jobs while simultaneously giving up the pretense that they have any idea what is going on.
2. Our Algorithmic Overlords: Facebook is investing a lot in machine learning and artificial intelligence. Sometimes that work isn't about getting you to spend more time on Facebook...or is it? With researchers at Georgia Tech, Facebook has been working on teaching machines to negotiate by "watching" human negotiations. One of the first things the machines learned was to "deceive." I use quotes here because, while it's the word the researchers use, I'm not sure "deceive" really applies in this context. And that's not the only part of the description that seems overly anthropomorphic.
Meanwhile, Lant Pritchett has a new post at CGD that ties together Silicon Valley, robots, labor unions, migration and development. And probably some other things as well. If I read Lant correctly, he would approve of Facebook's negotiating 'bots since negotiation is a scarce and expensive resource (though outsourcing negotiation is filled with principal-agent problems). I guess that means a world where robots are negotiating labor contracts for low- and mid-skill workers would be a better one than the one we're currently in?
3. Statistics, Research Quality and External Validity: Here's another piece from Lant on external validity and multi-dimensional considerations when trying to systematize education evidence. A simpler way to put it: He's got some intriguing 3-dimensional charts that allow for thinking a bit more carefully about likely outcomes of interventions, given multiple factors influence how much a child learns in school. It closely parallels some early conversations I've had for my next book with Susan Athey and Guido Imbens, so I'm paying close attention. And if you can't get enough Lant, you could always check out my current book. Yes, both of those sentences are shameless plugs.
Week of September 14, 2015
1. International Labor Mobility: FAI affiliate Michael Clemens discusses one of “the most effective development policies evaluated to date” and why it’s being ignored by major aid agencies. The Huffington Post
2. Small-Dollar Credit: Nick Bourke of The Pew Charitable Trusts suggests tweaks to the CFPB's proposed small-dollar credit regulations that would allow banks to compete with payday lenders by offering better options for borrowers (hopefully). American Banker
3. Poverty Alleviation: "All programs have room to improve. 'Pro-poor' programs actually strive to improve toward greater effectiveness. Transparency and accountability are not just about separating wheat from chaff; they are about improving." NextBillion
Week of February 9, 2015
1. Microfinance: A look at the microfinance industry's rebuilding efforts in a post-Ebola West Africa. Devex
2. Unbanked: In what sounds similar to a time-compressed version of this experiment, Accion Venture Lab's Managing Director reports on his experience of being unbanked for a day. Medium
3. Development Valentines: Every February, love is in the air...and on Twitter. Storify