Generative AI (GenAI) is having an Agile moment or two, and this is not a positive development. One aspect of this moment is the continued hyping of GenAI's unlimited agility, wherein GenAI is trumpeted as something akin to an analytic Swiss Army knife. The analogy is not merely problematic; it is actively harmful.
GenAI, like any other tool, creates value when applied discriminately. It does not create value when teams, in their haste to realize its purportedly unlimited potential, throw it at every problem in hopes that something sticks. Doing so means skipping the critical questions that would be asked of any other analytic or AI technique, the very questions that would ensure GenAI is applied in genuinely useful ways while avoiding predictable injury.
The current moment is also driven by ill-advised attempts to put a Band-Aid over unstable enterprise foundations with GenAI. Why is this an Agile (with a capital "A") moment? Many would-be early adopters of Agile development methods lacked the rigor required to go fast. They hoped Agile would create effective business/IT collaboration but failed to realize that such collaboration was, instead, a prerequisite for Agile. Organizations with productive stakeholder relationships succeeded with Agile, while organizations with fractious business/IT relationships failed painfully. Many still fail, refusing to acknowledge the foundational capabilities on which the methodology relies.
There is a similar feeling in the air with GenAI today. There are far too many inflated expectations and far too little appreciation for the fundamentals required to propel GenAI solutions from prognostication to production. While there are many factors at play, the following three have confounded more than their fair share of enterprise programs:
- Inadequate Literacy and Expectation Management
Making your employees more productive is a wonderful aspiration. It is not a problem that can be solved without elucidation of the substantive issues at hand. Yet, more often than we like to admit, objectives for GenAI programs are positioned in exactly this way.
Mix in unrealistic hype and a lack of frank discourse regarding the current capabilities and limitations of GenAI tools, and we have a perfect recipe for disillusionment. It is no surprise, then, that so many GenAI initiatives subsequently fail to launch or fall prey to the purgatory of perpetual pilots.
- Shaky Data and Content Management Foundations
GenAI tools generate content, and that content can be textual, graphical, and/or audible. What gets generated is directly influenced by the data the model was trained on, as well as by the inputs provided in the generation request (aka the prompt).
For GenAI to be effective, your data house must be in order. The expectation that GenAI can be applied across a disorganized, disconnected, and unregulated sea of data with impunity is just wrong. The old adage “garbage in, garbage out” still applies. The design of large language models (LLMs) also raises a more disturbing specter: “garbage in, gospel out.”
Moreover, the ability to deploy techniques such as retrieval-augmented generation (RAG) to improve the quality of generated content depends on access to readily accessible, well-catalogued data assets.
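That dependency is easy to see in miniature. The sketch below is illustrative only: a toy in-memory catalogue and a naive keyword-overlap retriever stand in for a real vector index, and the `retrieve` and `build_prompt` helpers and catalogue entries are hypothetical. The point it demonstrates is that the quality of a RAG prompt is bounded by the quality and accessibility of the catalogued data behind it.

```python
# Minimal RAG sketch (illustrative; not a production retriever).
# Assumptions: `catalogue` is a hypothetical in-memory stand-in for a
# well-catalogued data asset; a real system would use a vector store
# and pass the assembled prompt to an LLM.

def retrieve(query: str, catalogue: dict, k: int = 2) -> list:
    """Rank catalogued documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        catalogue.items(),
        key=lambda item: len(terms & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(query: str, context: list) -> str:
    """Ground the generation request by prepending retrieved context."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

catalogue = {
    "policy-001": "Refunds are processed within 14 business days",
    "policy-002": "Support hours are 9am to 5pm on weekdays",
}
context = retrieve("how long do refunds take", catalogue)
prompt = build_prompt("How long do refunds take?", context)
```

If the catalogue is incomplete, stale, or poorly described, the retriever surfaces the wrong context and the model generates confident answers from garbage, which is precisely the "garbage in, gospel out" risk noted above.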
Last, but certainly not least, deploying GenAI can increase both the quantity and diversity of content generated by the organization. If your content management lifecycle is already under strain (or does not exist), this increase in output will amplify content-related liabilities and hamper rather than increase productivity.
- Non-Correlated or Surface-Level Risk Assessments
GenAI tools and the LLMs beneath them produce errors routinely. Call them hallucinations, confabulations, or BS; they are a feature, not a bug, of these AI techniques. While methods such as RAG can reduce the incidence of errors, they cannot eliminate them. Often downplayed, this adds to the familiar litany of risks that algorithmic systems pose.
Therefore, deploying a solution incorporating GenAI (of any ilk) requires a steely-eyed acknowledgment and assessment of risk. Understanding your organization's risk aversion or tolerance is one element. Understanding the error tolerance of users, customers, or those you serve is another. The need to reconcile the uneasy tensions between these points of view is not unique to GenAI. It does, however, continue to get short shrift as hopeful aspirations of what GenAI could be outshine the realities of what it is today.
There is a rising tide of press reports and firsthand experiences questioning both the short- and long-term return on GenAI. Many of these enterprise-level failures result from inflated expectations, poor operational readiness, and an inadequate understanding of risk.
None of which is to suggest that GenAI should not be part of your organization's analytic arsenal. It can be, as long as your enterprise attends to the foundational elements outlined above.