When Risk Minimization Is Risky


The concept of risk is more complex than it is typically perceived to be. When Mark Zuckerberg said that “the biggest risk is not taking any risk,” he was referring to the age-old risk versus innovation dynamic. Innovation certainly requires some level of risk, but is it possible that minimizing risk can actually lead to more risk?

To grasp this apparent paradox, consider that it is impossible to eliminate all risk from our world. No one lives forever, and Earth itself is subject to epochal changes that we are often unable to predict or control. In 2020, regulations took effect to minimize sulfur dioxide pollution from container ships, but sulfur dioxide reflects sunlight, and so its decline has contributed to ocean warming. Minimizing one type of risk often increases another.

But in corporate bureaucracies, it is easy to justify a new process or tool to tamp down on risk, because the threat always feels immediate and clear. The downsides of a risk minimization strategy, by contrast, are often speculative, conceptual, and distal, and therefore less likely to sway near-term, anxiety-driven decisions.

For example, in U.S. hiring and employee selection, it’s illegal to discriminate against candidates who are members of protected classes, which include race, color, religion, sexual orientation, national origin, and more. If a candidate files a discrimination claim against an employer, the hiring organization could be required to produce documentation showing the level of disparate impact in its processes. Those records, in other words, are discoverable.

Rather than risk having to produce records that could show evidence of disparate impact, many corporate attorneys simply direct their organizations not to look for it in the first place. That is, they seek to minimize the legal risk of disclosing discrimination by instructing the organization not to check for discrimination.

Chances are that if a company is not monitoring bias levels in its selection processes, bias is present. Bias is insidious, and the only way to root it out is to monitor for it continually and adjust accordingly.

Developing continuous monitoring processes for AI hiring tools and algorithms to ensure that a company’s hiring is not discriminatory is harder than simply electing not to address the issue at all. The latter may minimize near-term exposure; however, it can have damaging consequences for a business that lacks the validation data to prove its tools and processes are free of bias. Hiding likely evidence of bias allows systemic racism to flourish, and that is a far greater risk to an organization than a biased selection tool that could easily be fixed.
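To make “monitoring bias levels” concrete: the standard yardstick in U.S. employee selection is the EEOC’s four-fifths rule, under which a group whose selection rate falls below 80% of the highest group’s rate is generally regarded as showing evidence of adverse impact. A minimal sketch of such a check might look like the following (the data, column names, and function names are hypothetical):

```python
import pandas as pd

def adverse_impact_ratios(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.Series:
    """Each group's selection rate divided by the highest group's rate.

    Under the EEOC's four-fifths guideline, a ratio below 0.8 is
    commonly treated as evidence of disparate impact.
    """
    rates = df.groupby(group_col)[selected_col].mean()  # hires / applicants per group
    return rates / rates.max()

# Hypothetical applicant log: 1 = advanced by the screening tool, 0 = rejected
applicants = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})

ratios = adverse_impact_ratios(applicants, "group", "selected")
flagged = ratios[ratios < 0.8]  # groups below the four-fifths threshold
print(ratios.round(2))          # A: 1.00, B: 0.33
print("Potential disparate impact:", list(flagged.index))
```

Running a check like this is neither difficult nor expensive; the hard part, as the legal calculus above suggests, is being willing to look at the output.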

Another prominent example is in the financial services industry. In 2016, Wells Fargo faced significant compliance issues when it was revealed that employees had created millions of unauthorized bank and credit card accounts without customers’ knowledge.

Wells Fargo’s internal systems and data monitoring processes failed to detect or prevent the fraudulent activities, leading to a major compliance breach and substantial damage to the bank’s reputation.

The bank’s lack of comprehensive data visibility and control underscores the importance of robust data management and monitoring systems for ensuring compliance and preventing fraud.

In both examples, a flawed approach to risk (overwrought avoidance in one case, absent controls in the other) produced a larger, long-term existential threat.

AI Is Both Risky and Imperative

In the era of AI, the risk versus innovation dilemma is ever present, and the only solution is a nuanced, balanced approach that threads the needle between the two concerns.

In the lifetime of anyone reading this, and possibly of anyone who ever lived, there has not been a technology as disruptive as AI. Entire industries are being decimated overnight as big tech companies release new features of their core products. Elon Musk has claimed that AI will replace all human jobs. Less hyperbolically, in AI Superpowers: China, Silicon Valley, and the New World Order, Kai-Fu Lee writes that as AI advances, it will rob humans of meaningful work and lead to a “psychological loss of one’s purpose.”

Most businesses and other organizations do not have the luxury of ignoring how AI can alter and improve their operations if they wish to remain solvent and relevant in the near future.

At the same time, AI is a bewildering array of capabilities with a serious set of risks and limitations, and it is difficult to know exactly how (and how not) to apply it to specific operational tasks.

As with most major technological advances, the world is struggling to harness AI with various laws and regulations. Thousands of pieces of legislation are emerging around the world, with more than 80 bills in the U.S. Congress alone at the moment. Even the large tech firms often call for regulation, though some critics believe this is just a strategy to expand their span of control, since these organizations are better able to comply with regulations than smaller startups and competitors.

But consider nuclear power: It brings great environmental benefits, yet no one would suggest that plants be exempt from stringent safety regulations. AI is no different. Left solely to the devices of for-profit companies, AI will ultimately harm humanity.

Most emerging AI legislation will not reduce innovation, as it is quite minimal in its requirements. For example, proposed rules often require deployers of AI tech to monitor their tools for bias against protected classes, conduct impact analyses, and ensure that individual data privacy is not violated. Are these onerous standards? On the contrary: Any tool as powerful as AI should come with basic monitoring capabilities to ensure it doesn’t harm individuals.

From a business perspective, complying with AI legislation is not enough. If an organization ensures that its AI implementations do not violate individual privacy and are fair to all classes, that still doesn’t prove the technology is effective. Laws rarely regulate effectiveness; it is left to the buyer’s own risk. A hiring manager can flip a coin to decide which candidate to hire without violating any employment law, yet the process is pure chance. Organizations today seek greater assurance that their investments, especially in AI and data analytics, will yield positive results.
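Effectiveness, unlike legal compliance, can be measured directly: for instance, by checking whether a tool’s scores actually predict later job outcomes (what industrial psychologists call criterion-related validity). A minimal sketch, using hypothetical assessment scores and performance ratings:

```python
from scipy import stats

# Hypothetical paired records: the tool's score for each hire and a
# later measure of that hire's job performance
assessment_scores   = [62, 71, 55, 80, 90, 47, 68, 74, 59, 85]
performance_ratings = [3.1, 3.8, 2.9, 4.2, 4.5, 2.4, 3.3, 3.9, 3.0, 4.4]

# Validity coefficient: correlation between predicted and actual outcomes
r, p_value = stats.pearsonr(assessment_scores, performance_ratings)
print(f"validity coefficient r = {r:.2f} (p = {p_value:.3f})")

# A coin flip would produce r near 0; a selection tool worth its license
# fee should show a meaningfully positive correlation.
```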

The solution to innovating with AI safely is rigor in the form of continuous monitoring. Nuclear power control systems do not take breaks, and neither should AI control systems. Algorithmic audits can be useful point-in-time evaluations, but they are inadequate compared with continual quality assurance processes. And while it is important to understand exactly how AI works, it is more vital and immediate for a business to understand its impacts; a business doesn’t necessarily need to know how the models were built.
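The operational difference between an audit and a control system is the loop: an audit runs once, while a monitor runs on a schedule and never stops. A minimal sketch of that skeleton, with the actual battery of checks (adverse impact ratios, per-group accuracy, input drift) left as a hypothetical placeholder:

```python
import time

def run_quality_checks() -> list[str]:
    """Placeholder for whatever checks apply to the deployment:
    adverse impact ratios, per-group accuracy, drift detection, etc.
    Returns the names of any checks that failed."""
    return []

# A point-in-time audit runs once; a control system never takes a break.
while True:
    failures = run_quality_checks()
    if failures:
        print("ALERT:", failures)  # in practice, page the owning team
    time.sleep(3600)  # re-evaluate every hour
```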

Of course, not every implementation of AI is risky. The EU Artificial Intelligence Act specifies four levels of risk: minimal, limited, high, and unacceptable. Using AI to make a toaster slightly better at not burning bread won’t rise to threat level Delta, even if it works more reliably with white bread than with pumpernickel.

But a high-risk application, such as an AI video hiring assessment that is more accurate when evaluating white candidates than Black candidates, is a different story.

Another dimension to consider when assessing risk is time. The safest thing a company can do right now may well be nothing. But doing nothing indefinitely results in, well, nothing. Organizations can avoid running potentially damning analytics on their business so that they never have to confront, or disclose, the results.

They can overfit their risk prediction models to make themselves feel better about their span of control and reject new approaches that threaten their core business processes. But doing so won’t change reality. And in an age of massively disruptive technology, those decisions would be risky, indeed.


