Artificial Intelligence Grows Up in 2020

For several years, AI has been the enfant terrible of the business world, viewed as a technology full of unconventional and controversial behavior that has shocked, provoked, and enchanted audiences worldwide. That’s all going to change. In 2020, AI will grow up, encountering new demands in the areas of responsibility, advocacy, and regulation.


For a few years, I’ve been working on new data science patents, pushing AI technology to be more defensible, explainable, and ethical. Driven by the ever-rising onslaught of new AI applications—coupled with the fact that regulation around AI explainability, transparency, and ethics is still emerging—there will be higher expectations for responsible AI systems in 2020.

For example, if a medical device—such as a heart pacemaker—were rushed to market, it could be poorly or negligently designed. If people using that device were harmed, there would be liability. The company providing the device could be sued by individuals or groups if a lack of rigor and/or reasonable effort were proven. 

Similarly, there will be a more punitive response to companies that treat explainable, ethical AI as optional. “Oops! We’ve made a mistake with an algorithm and it’s having a harmful effect” will no longer be the basis of interesting news stories about AI gone rogue, but instead a call to action.

In 2020, AI insurance will become available, with companies looking to insure their AI algorithms against liability lawsuits. Using blockchain or other means for auditable model development and model governance will become essential in demonstrating the due diligence necessary in building, explaining, and testing AI models.
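To make the idea of auditable model governance concrete, here is a minimal sketch of a tamper-evident audit ledger in Python. It is an illustration only, not a production blockchain or any specific vendor's product; the function names, event types, and payload fields are all hypothetical. Each entry's hash covers the previous entry's hash, so altering any step of the recorded model-development history invalidates everything that follows it.

```python
import hashlib
import json

def record_entry(ledger, event, payload):
    """Append an audit entry whose hash chains to the previous entry.

    Because each hash covers the previous one, editing any earlier
    entry invalidates every entry after it (a blockchain-like property).
    """
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    body = json.dumps(
        {"event": event, "payload": payload, "prev": prev_hash},
        sort_keys=True,
    )
    entry = {
        "event": event,
        "payload": payload,
        "prev": prev_hash,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    }
    ledger.append(entry)
    return entry

def verify(ledger):
    """Recompute every hash in order; return False if any entry was altered."""
    prev_hash = "0" * 64
    for entry in ledger:
        body = json.dumps(
            {"event": entry["event"], "payload": entry["payload"], "prev": prev_hash},
            sort_keys=True,
        )
        expected = hashlib.sha256(body.encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

# Hypothetical model-development events recorded for due-diligence purposes.
ledger = []
record_entry(ledger, "data_snapshot", {"dataset": "train_v1", "rows": 10000})
record_entry(ledger, "model_trained", {"model": "credit_score_v2", "auc": 0.81})
record_entry(ledger, "bias_test", {"metric": "demographic_parity", "passed": True})
print(verify(ledger))  # True for an untampered ledger
```

An auditor (or an insurer pricing an AI liability policy) could then verify the ledger end to end; any after-the-fact edit to a recorded training step would make `verify` return `False`.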


Can AI be harmful? Think about someone being denied rightful entry to a country due to inaccurate facial recognition. Or someone being misdiagnosed by disease-seeking robotic technology. Or someone being denied a loan because a new type of credit score built on non-causal features rates them poorly. Or someone being incorrectly blamed as the cause of an auto accident by their insurance company’s mobile app loaded onto their phone.

Already there are numerous ways that people are treated unfairly in our society, with advocacy groups to match. With AI advocacy, there may be a different construct, because consumers and their advocacy groups will demand access to the information and the process by which the AI system made its decision. AI advocacy will provide empowerment, but it may also drive significant debates among AI experts as they interpret and triage the data, the model development process, and the implementation.

AI advocacy will move from being a radical idea to a commonplace function as AI adoption grows.


Industry leaders hold a blanket, negative view of government regulation as an innovation inhibitor. This couldn’t be further from the truth with AI. Regulators and legislators are trying to protect consumers from the negative effects of technology (in reality, from the human creators who misuse AI/machine learning) through vehicles such as the EU’s GDPR, California’s CCPA, and other regulations. Often, however, demands are being made of technology about which there is little understanding.

Granted, at the opposite end of the scale, there are the companies that clasp their metaphorical hands and say, “We are ethical; we will do no evil with your data.” But, without a standard of accountability, we’ll never know for sure if this is actually the case. For both extremes (and those in between), we will see the rise of international standards to define a framework for safe and trusted AI in 2020 because regulation keeps companies honest. Hopefully, we’ll also see AI experts support and drive regulation of the industry, ensuring fairness and inculcating responsibility.


As AI grows into a pervasive technology, there is little trust in the morals and ethics of many of the companies that use it. As a data scientist, I find that disheartening. However, as modern society discovers more about the damage that can be done by the misuse of AI, it’s clear that experience—not the mantra of “move fast and break things”—matters. It’s time for AI to grow up in 2020.
