No, AI is not for social good

2019/11/25

Faced with the public furor over problems with artificial intelligence, tech companies and researchers would now have us believe that the big fix is to develop AI for social good.

This proposal is not new; it’s the latest in a long line of bold, mostly overreaching claims about technology’s capacity to do social good. In his 2010 book The Master Switch, Tim Wu makes the case that, in the beginning, television was supposed to change the world by making information freely available. In the ’90s, telecenters were supposed to transform education in developing countries. During the Arab Spring, we heard that social media was the loudspeaker of democracy. Put succinctly, these positions argue that the mere presence of certain technologies can effect social change: that, for example, with smartphones, poverty will disappear.

Likewise, the promise of “AI for the good” ignores the fact that problems like poverty, recidivism, and the distribution of resources are political ones; they’re often the results of institutional failure. Technologies, when not aimed at the root of problems, divert our attention. On top of that, do we really want to leave big tech to “solve” these social problems when it has shown it’s capable of creating substantial social problems of its own? I’m thinking here of Facebook and the Cambridge Analytica scandal, for example.

On one hand, it seems obvious that AI will make things better. It is like electricity, right? It is bringing new types of jobs, it is creating automation that will give us all more leisure time, and it will solve historically underfunded problems because it can do the work of many people at a fraction of the cost.

The reality is more complicated. Training some of the statistical models that form the core of AI systems can consume more energy than the average American household uses in a year. Behind the scenes of AI systems, like those that let driverless cars “see,” are thousands of low-paid laborers labeling millions of images. These workers complete tasks like “outline the truck in this picture” for hours on end. And far from being a perfect tool, AI has quietly assumed the same kinds of biases humans are prone to.

These realities of AI seem at odds with it being a tool for social good.

So, when working for “the good,” we must ask a few questions: Which good, and for whom? Is it only AI that can do this good?

Facebook, Google, Microsoft, and many others have begun to market their efforts under the banner of “AI for social good.” None offers a concrete justification of what makes these projects good. Instead, the implication is that simply working on energy, health, or criminal justice, for example, is enough.

We might disagree with this definition of good. For example, one center at the University of Southern California (USC) works to “demonstrate how AI can be used to tackle the most difficult societal problems.” Yet some of its projects attempt to apply machine learning to better allocate L.A. anti-terrorism resources, and one aims to identify whether certain crimes in L.A. are gang related. As Ben Green describes, this latter effort ignores the racialized history and practice of policing in Los Angeles and raises serious concerns about perpetuating the 1990s myth of the “superpredator.”

When we consider such projects for what they are, less glamorous issues appear: issues of bias, representation, and accountability. In their projects, the USC researchers act in a way that upholds, rather than questions, the status quo on crime and terrorism. Indeed, their decision to work on gang-prediction and anti-terrorism projects is political, a fact lost when the projects are obfuscated with terms like “social good.”

These vague definitions of good, combined with sloppy ideas about AI, leave us uncritical of what is happening with both the technology and, more importantly, the money.

That is not to say that we should not develop AI technologies. This past year, I worked at the Mumbai-based Wadhwani Institute for Artificial Intelligence to apply AI technologies in support of some of the communities most in need. In May, we won a Google AI Impact Challenge award for work to help smallholder farmers identify crop-destroying pests. I am confident in the good of this work. Similarly, my peers use AI to better estimate the distribution of poverty, to model the spread of infectious diseases, and to identify hate speech in online communities. There is clearly a role for AI technologies to do things that I would argue are good.

Whether teaching computing ethics at the University of Washington or explaining tech to a room of philosophers, I keep running into these political questions. Of course, what’s good to me is not good to everyone.

That said, the solution is not to avoid the inherently political discussion of how to define “good” but to embrace these discussions and to have a say in how technology and large corporations impact our lives.

If we do not address the political implications of AI technologies, calling them “good” will only generate hype and a sense of a job well done. That is not good enough. We must challenge each technology’s creators to be clear about how they define social good. There is a role for AI technologies in creating a better world, one in which we realize commitments to doing good. Let us be honest about it.

(Source: VentureBeat, https://venturebeat.com/2019/11/23/no-ai-is-not-for-social-good/)