David Weinberger Considers the Benefits and Risks of AI in Data Summit 2018 Keynote


Data Summit 2018 kicked off this week with a keynote by David Weinberger, senior researcher at Harvard's Berkman Center for Internet & Society, titled "Once We Know Everything … or Suppose AI is Right?"

Throughout history, people have used tools to "anticipate and narrow": drawing on information and lessons learned from the past to control and prepare for possibilities they may encounter again, limiting risk and increasing the potential for success, said Weinberger.

With the arrival of big data, machine learning, data interoperability, and all-to-all connections, machines are changing long-established concepts of what we know and what can be known.

AI turns that approach on its head, changing the strategy to “un-anticipate” and “enlarge.”

Weinberger surveyed humanity's attempts to understand the world, citing great philosophers and scientists, including Isaac Newton, whose gift to us, he said, was not just the laws he discovered but the model behind them: the idea that there is a set of universal laws, simple enough for humans to understand and use, that apply everywhere, in the same way, to everything.

Thanks to Newton, though against his will, the universe came to be seen as clockwork: each part clear and simple, working according to uniform laws that we can follow.

Artificial intelligence and machine learning, on the other hand, are difficult or even impossible to understand or rationalize. "Black boxes" take massive amounts of data, run it through multiple passes of a neural network in the case of deep learning, and find probabilistic relationships among millions of nodes. This causes fear and raises the deeply unsettling question of whether the machines' thinking may be right and ours wrong.

In addition, Weinberger pointed out, AI raises issues of fairness. Machine learning is trained on existing data, and because all cultures are biased, that data carries those biases.

Still, machine learning, and especially deep learning, presents opportunities that will have to be leveraged, said Weinberger, who cited examples of positive results, such as the work of Mt. Sinai Hospital in New York City, whose Deep Patient project has found correlations that let it predict the onset of some diseases more accurately than ever before, and some diseases, such as schizophrenia, for the first time at all.

AI poses many challenges, not the least of which are moral and ethical issues. The new model of models doesn't feel the need to reduce complexity to simple rules and laws, and it may not be understandable. It may be, in effect, a working model without a conceptual model behind it, and it may not be able to provide an explanation that humans can accept for how it arrived at a given answer or prediction, said Weinberger.

While it has been suggested that AI must be made understandable to humans so that it is "explainable," doing so could result in less favorable outcomes, such as a smaller reduction in traffic deaths from self-driving cars, or fewer successes in identifying patients who may be at risk for certain illnesses. Limiting AI to what humans can understand and confirm, and thus limiting its potential, may therefore present a moral dilemma of its own.

Data Summit 2019, presented by DBTA and Big Data Quarterly, is tentatively scheduled for May 21-22, 2019, at the Hyatt Regency Boston with pre-conference workshops on May 20.

Many presentations from Data Summit 2018 have been made available for review at www.dbta.com/DataSummit/2018/Presentations.aspx.

