With AI and Data Solutions, Ask Why But Also: Why Not?

Author and inspirational figure Simon Sinek has ably demonstrated that great leaders (and leading companies) “start with why.” No argument there. But validating whether your why is authentically and appropriately reflected in the solutions you deploy requires a follow-up question, namely: Why not?

Humans are notoriously protective creators. We naturally, and with the full conviction of our good intent, zealously defend the “why” or—more often—the “how” of the solutions we create. But, as stories of unintended consequences and harms accumulate, the need for mindful critique has become self-evident.

AI and other data-driven solutions don’t just learn at scale; they execute at scale, thereby amplifying small errors or biases and reinforcing patterns of behavior. Ensuring data-driven solutions are healthy, safe, productive, and just requires the creators of AI solutions to become their own worst critics.

To be sure, governance and criticism get a bad rap. Governance has become synonymous with rigid control and constraints; criticism, with negativity alone. Both terms evoke a means by which to stifle innovation or stall progress. Yet, this is not the objective of governance or criticism.

As author Max Florschutz highlighted in a July 20, 2020, treatise on literary criticism, being our own worst critic is not about denigrating our creations. A “critic used to be someone who carefully judged the merits of a work” so as “to help the creator improve by knowing where to focus their efforts next.” This, in a happy turn, is the ultimate objective of effective governance.

Ask Questions First

Do your users or customers want what you think they want? Do they believe your products and services are working in their best interest? Do they want to engage with you? Will they trust you? Should they?

In the rush to bring AI and data solutions to bear, don’t guess and don’t just ask, “Why?”; also ask, “Why not?” Consider why this application might not be a good idea, may not lead to our intended outcome, might not be well-received, and might not safeguard human dignity and liberties.

Also think of why a person might not use the application as intended, might not match our expected user profile, might not expect something to work that way, might not want to use the solution, or might not trust our intent.

Try to consider why the data might not accurately represent the current state, not reflect a desired future state, not be appropriate to use in this context, not have been intended for this use, or not represent what we think it represents.

Also evaluate why the model might not accurately depict cause and effect, not be optimized for what we intended, not be sustainable, or not lead to a defensible conclusion/action.

Asking ‘Why Not?’

Asking “Why not?” does not undermine the legitimacy of our efforts or call into question our intent. Rather, robust critique improves data-driven products and services. It shifts our point of view and invites others into the fray to challenge our beliefs and assumptions, thereby enabling us to identify and remediate a spectrum of ills, from simple logical fallacies to gross mistakes in judgment.

Are your AI and data-driven solutions part of the problem or the solution? Why not go ahead and ask?
