The Ethics of AI


Imagine you are standing by a railway track, next to a lever that switches between two sets of tracks. A runaway trolley is heading toward the fork, and five people are trapped on the track the trolley is currently set to follow. You could switch the trolley to the other track, but a single person is trapped there. Do you pull the lever? On one hand, if you take no action, five people will die. On the other hand, if you switch the tracks, you bear responsibility for the one person who loses their life.

This “trolley problem” is a thought experiment designed to probe moral dilemmas, particularly the consequences of action versus inaction. Variations on the trolley problem test assumptions about the value of different kinds of lives: for instance, by placing an elderly person on one track and a disabled person on the other.

Thankfully, none of us must wrestle with these moral dilemmas in our everyday lives. The artificial intelligence community, however, increasingly faces similar conundrums embedded in the ever-more pervasive algorithms that underlie our technological infrastructure. There is a strong parallel, for instance, between the choices made by self-driving cars and the trolley problem. In an emergency, should the car swerve toward the smaller number of pedestrians, even if that means violating right-of-way laws? Should a self-driving car value the lives of its passengers over the lives of bystanders? Should programmers encode judgments about the relative value of lives, such as prioritizing the safety of children?

These moral quandaries are not exclusive to self-driving cars. There has been growing concern about the prevalence of “fake news”: content that has the appearance of journalism but no factual basis. The ostensibly neutral ranking algorithms of Facebook and Twitter allowed popular, sensational fake news stories to displace real news, which may in turn have affected the outcome of the recent U.S. presidential election.

The intent of those who built the Facebook and Twitter ranking algorithms was presumably to eliminate built-in bias by letting popularity alone drive article rankings. Yet while removing editorial weighting from newsfeeds might seem the most unbiased approach, if it results in the widespread dissemination of false information, the consequences can be dire. It is a real-world equivalent of the trolley problem: inaction may let you feel less responsible, but it can still have terrible consequences.
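To make the mechanism concrete, here is a minimal, hypothetical sketch of a purely popularity-driven ranker. The scoring rule, field names, and numbers are illustrative assumptions, not any platform’s actual code.

    # Hypothetical sketch: a feed ranker that orders stories purely by engagement,
    # with no weighting for accuracy. All names and numbers are invented.

    def rank_stories(stories):
        """Sort stories by raw engagement (shares + comments), ignoring accuracy."""
        return sorted(stories, key=lambda s: s["shares"] + s["comments"], reverse=True)

    feed = [
        {"title": "Careful investigative report", "shares": 1200, "comments": 300, "accurate": True},
        {"title": "Sensational fabricated claim", "shares": 9500, "comments": 2100, "accurate": False},
    ]

    for story in rank_stories(feed):
        print(story["title"], "- accurate:", story["accurate"])
    # The fabricated story ranks first: a "neutral" popularity ranking is not value-neutral.

In a sketch like this, the choice not to weight for accuracy is itself an editorial decision, even though no editor ever touches an individual story.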

As Cathy O’Neil argues in her book Weapons of Math Destruction, algorithmic approaches to decision making, including and perhaps especially those based on machine learning, appear impartial and objective, but they often encapsulate and institutionalize the biases and prejudices of the systems and data that produce them. For instance, algorithms designed to predict recidivism are increasingly used in parole decisions. However, these algorithms are trained on data from our imperfect world, data that reflects significant prejudice and injustice. Once established, they can enshrine the very prejudices they were meant to eliminate.
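A toy example shows how a model trained on skewed records reproduces the skew. The data, groups, and scoring below are invented purely for illustration and bear no relation to any real recidivism tool.

    # Hypothetical sketch: a "risk score" learned from biased historical records.
    from collections import defaultdict

    # Records of (neighborhood, was_rearrested). Suppose neighborhood A was
    # historically over-policed, so more arrests were recorded there regardless of behavior.
    history = [
        ("A", True), ("A", True), ("A", True), ("A", False),
        ("B", True), ("B", False), ("B", False), ("B", False),
    ]

    # "Training": estimate a rearrest rate per neighborhood from the biased records.
    counts = defaultdict(lambda: [0, 0])  # neighborhood -> [rearrests, total]
    for neighborhood, rearrested in history:
        counts[neighborhood][0] += int(rearrested)
        counts[neighborhood][1] += 1

    def risk_score(neighborhood):
        rearrests, total = counts[neighborhood]
        return rearrests / total

    print(risk_score("A"))  # 0.75 -- the model inherits the over-policing of A
    print(risk_score("B"))  # 0.25 -- and presents it as an "objective" prediction

The model never sees anyone’s actual behavior; it simply launders the historical pattern of enforcement into a number that looks like a neutral prediction.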

Collective intelligence, a sort of computer-mediated groupthink, successfully drives Google’s search predictions and many recommendation systems. Yet it can also go disastrously wrong. Microsoft’s “Tay” chatbot was designed as a virtual Twitter personality whose behavior would adapt in response to interactions with real users. Within 24 hours, Tay was generating racist hate speech and had to be taken offline. The “wisdom of the crowd” is not always reliable.

To date, software engineers have been largely shielded from liability for software faults; it is hard to think of a case in which a bug in a software program has resulted in significant criminal penalties. That may be about to change in a world in which algorithms have increasingly significant real-world consequences. Software engineers and architects will increasingly need to take responsibility for the implications of their algorithms. Perhaps it is time for an AI code of ethics?
