1. Algorithmic Overlords (+ Banking + Digital Finance + Global Development) book review: I'd like to call myself prescient for bringing Amar Bhide into last week's faiV, which was headlined by questions about the value of banks. Little did I know that he would have a piece in National Affairs on the value of banks, Why We Need Traditional Banking. One reason to read the (long) piece is his perspective on the important role that efforts to reduce discrimination through standardization and anonymity played in the move to securitization. Bhide names securitization as the culprit for a number of deleterious effects on the banking system and the economy overall (with specific negative consequences for small business lending).
The other reason to read the piece is that it is a surprisingly great complement to Automating Inequality, the new book from Virginia Eubanks. To cut to the chase: it's an important book that you should read if you care at all about the delivery of social services, domestically or internationally. But I think the book plays up the technology angle well beyond its relevance, to the detriment of some very important points.
The subtitle of the book is "how high-tech tools profile, police and punish the poor," but almost all of the examples Eubanks gives are rooted in a) the simple continuation of policies for the delivery of social services dating back to, well, the advent of civilization(?), and b) the behaviors of the humans in the systems, not the machines. In a chapter about Indiana's attempt to automate much of its human services system, there is a particularly striking moment: a woman who has been denied services because of a technical problem with an automated document system receives a phone call from a staffer who tries very hard to convince her to drop her appeal. She doesn't, and wins her appeal in part because technology allowed her irrefutable proof that she had provided the documents she needed to. It's apparent throughout the story that the real problem isn't the (broken) automation but the attitudes and political goals of human beings.
The reason I know point a) above, though, is that Eubanks does such an excellent job of placing the current state in historical context. The crucial issue is how our service delivery systems "profile, police and punish" the poor; it's not clear at all how much the "high-tech tools" are really making things worse. This is where Bhide's discussion is useful: a major driver toward such "automated" behaviors as using credit scores in lending was to do an end-run around the discrimination that was rampant among loan officers (and continues to this day, and not just in the US). While Eubanks does raise the question of the source of discrimination in a chapter about Allegheny County, PA, she doesn't make a compelling case that algorithms will be worse than humans. In the discussion on this point she even subtly undermines her argument by judging the algorithm against false report rates extrapolated from a study conducted in Toronto. This is the beauty and disaster of human brains: we extrapolate all the time, and we are by nature very poor judges of whether those extrapolations are valid. In Allegheny County, by Eubanks's telling, concern that caseworkers were biased in the removal of African-American kids from their homes was part of the motivation for adopting automation. It turns out the caseworkers are not biased. But there is discrimination. The source is again human beings, in this case the ones reporting incidents to social services. The high tech is again largely irrelevant.
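To make the extrapolation problem concrete, here's a minimal sketch with invented numbers (none of these come from the Toronto study or from Allegheny County): the very same reporting behavior produces wildly different false-report shares depending on the underlying rate of true incidents, which is exactly the kind of shift a casual extrapolation glosses over.

```python
# Hypothetical illustration of why extrapolating error rates across
# populations is risky. All numbers are invented for this example.

def share_of_reports_that_are_false(prevalence, hit_rate, false_report_rate):
    """Fraction of all reports that are false, given:
    prevalence        -- share of families with a true incident
    hit_rate          -- chance a true incident generates a report
    false_report_rate -- chance a family with no incident is reported anyway
    """
    true_reports = prevalence * hit_rate
    false_reports = (1 - prevalence) * false_report_rate
    return false_reports / (true_reports + false_reports)

# Identical reporting behavior, two populations with different base rates:
for prevalence in (0.10, 0.02):
    frac = share_of_reports_that_are_false(prevalence, 0.8, 0.05)
    print(f"prevalence {prevalence:.0%}: {frac:.0%} of reports are false")

# prevalence 10%: 36% of reports are false
# prevalence 2%: 75% of reports are false
```

Carry a rate measured in the first population over to the second and you'll be off by a factor of two, through no fault of the algorithm being judged.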
I am particularly sensitive to these issues because I wrote a book, in part, about the Toyota "sudden acceleration" scare a few years ago. The basics are that the events described by people who claim "sudden acceleration" are mechanically impossible. But because there was a computer chip involved, many, many people were simply unwilling to consider that the problem was the human being, not the computer. There's more than a whiff of this unjustified preference for human decision-making over computers in both Bhide's piece and Eubanks's book. For instance, one of the reasons Eubanks gives for concern about automation algorithms is that they are "hard to understand." But algorithms are nothing new in the delivery of social services. Eubanks uses a paper-based algorithm in Allegheny County to try to judge risk herself--it's a very complicated and imprecise algorithm that relies on a completely unknowable human process, one that necessarily varies between caseworkers and even day-to-day or hour-to-hour, to weight various factors. Every year I have to deal with social services agencies in Pennsylvania to qualify for benefits for my visually impaired son. I suspect that everyone who has done so, here or anywhere else, will attest that there is clearly some arcane process happening in the background. When that process is not documented, for instance in software code, it will necessarily be harder to understand.
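To illustrate the contrast, here's a minimal sketch of what a documented, weighted-factor score looks like in code (the factors and weights are invented for illustration; this is not Allegheny County's actual model). Unlike a caseworker's internal weighting, every factor and weight sits in plain view and stays fixed until someone deliberately changes it.

```python
# Hypothetical weighted-factor risk score. Factors and weights are
# invented; the point is that a coded process is inspectable in a way
# a caseworker's internal weighting is not.

WEIGHTS = {
    "prior_referrals": 2.0,
    "missed_appointments": 1.0,
    "household_size": 0.5,
}

def risk_score(case):
    """Sum each factor times its published weight. The same inputs
    always produce the same score, and anyone can read WEIGHTS to see
    exactly how the score is built."""
    return sum(weight * case.get(factor, 0) for factor, weight in WEIGHTS.items())

example = {"prior_referrals": 3, "missed_appointments": 2, "household_size": 4}
print(risk_score(example))  # 3*2.0 + 2*1.0 + 4*0.5 = 10.0
```

A score like this can certainly be wrong or unfair, but at least the argument about it can point to specific weights rather than a process no one can see.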
To draw in other examples from recent faiV coverage, consider two papers I've linked about microfinance loan officer behavior. Here, Marup Hossain finds loan officers incorporating information into their lending decisions that they are not supposed to use. Here, Roy Mersland and colleagues find loan officers adjusting their internal algorithms over time. In both cases the loan officers are, by some criteria, making better decisions. But they are also excluding the poorest, even profiling, policing and punishing them, in ways that are very difficult to see. While I have expressed concern recently about LenddoEFL's "automated" approach to determining creditworthiness, at least if you crack open their data and code, you can see how they are making decisions.
None of which is to say that I don't have deep concerns about automation and our algorithmic overlords. Those concerns are in many ways reinforced and amplified by Eubanks's book. While she is focused on the potential costs of automation to the poor, I see two areas that are not getting enough scrutiny.
First, last week I had the chance to see one of Lant Pritchett's famous rants about the RCT movement. During the talk he characterized RCTs as "weapons against the weak." The weak here aren't the ultimate recipients of services but the service delivery agencies that are not politically powerful enough to avoid the scrutiny of an impact evaluation. There's a lot I don't agree with Lant on, but one area where I do heartily agree is his emphasis on building the capability of service delivery agencies. The use of algorithms, whether paper-based or automated, can also be a weapon against the weak. Here, I look to a book by Barry Schwartz, a psychologist at Swarthmore perhaps best known for The Paradox of Choice. He has another excellent book, Practical Wisdom, about the erosion of opportunities for human beings to exercise judgment and develop wisdom. That book makes it clear that it is not only the poor who are increasingly policed and punished: mandatory sentencing guidelines and mandated reporter statutes are efforts to police and punish judges and social service personnel. The big question we have to keep in view is whether automation is making outcomes better or worse. The reasoning behind much of the removal of judgment that Schwartz notes is benign: people make bad judgments; people wrongfully discriminate. When that happens there is real harm, and it is not obviously bad to try to put systems in place to reduce unwitting errors and active malice. It is possible to use automation to build capability (see: the history of civilization), but it is far from automatic. As I read through Eubanks's book, it was clear that the automated systems were being deployed in ways likely to diminish, not build, the capability of social service agencies. Rather than pushing back against automation, the focus has to stay on how to use automation to improve outcomes and build capability.
Second, Eubanks makes the excellent point that while poor families and wealthier families often need to access similar services, say addiction treatment, the poor access them through public systems that gather, and increasingly use, data about them in myriad ways. One's addiction treatment records can become part of criminal justice, social service eligibility, and child custody proceedings. Middle-class families who access services through private providers don't have to hand over their data to the government. This is all true. But it neglects that people of all income levels are handing over huge amounts of data to private providers who increasingly stitch all of that data together with far less scrutiny than public agencies are potentially subject to. Is that really better? Would the poor be better off if their data were in the hands of private companies? It's an open question whether the average poor person or the average wealthy person in America has surrendered more personal data--I lean toward the latter, simply because the wealthier you are, the more likely you are to be using digital tools and services that gather (and aggregate and sell) a data trail. The key determinant of what happens next isn't, in my mind, whether the data is held by a government or a private company, but who has the power to fight nefarious uses of that data. Yes, the poor are often going to have worse outcomes in these situations, but that's not because of the digital poorhouse; it's because of the lack of power to fight back. And they are not powerless--Eubanks's stories tend to include examples of political power reining in the systems. As private digital surveillance expands, though, the percentage of the population who can't fight back is going to grow.
So back to the bottom line: you should read Automating Inequality. You will almost certainly learn a lot about the history of poverty policy in the US and about what is currently happening in service delivery there. You will also see lots to be concerned about in the integration of technology and social services. But hopefully you'll also see that the problem is the people.