Is Transparency the New T&C?


Ah, those comprehensive, yet amazingly unclear terms and conditions (T&C). You know the ones. They include a minimum of 10 pages of scrolling text detailing the company’s rights and obligations. Of course, the critical bits regarding your data or rights are beyond the point at which even young eyes go blurry.

Or, there’s my recent favorite: cookie consents that put an easy “accept all” button front and center but force you to navigate a multitude of pages to painstakingly reject multiple nested options one by one.

The newest iteration of this phenomenon? Transparency. Transparency is a core pillar of most responsible tech frameworks and emerging regulations. When done well, transparency ensures a system has been thoughtfully and thoroughly vetted for its intended uses, with proper usage and predictable misuses identified. Transparency provides context about why particular guardrails are in place and ensures the subjects of these systems are aware of their intended benefits and their failings.

Transparency should not be, in my view at least, a simple method to shift responsibility and risk. This is, however, the recent trend. So-called “transparency” is becoming a go-to mechanism for securing legal and reputational protection under the cover of plausible deniability. A means to kick the can down the road, to experiment at scale, to call for regulation while eschewing self-discipline, to shift accountability to users who can’t reasonably be expected to critically interrogate these claims. Even, perhaps especially, when the systems are designed to obfuscate the very risks and limitations the creator has so transparently revealed.

Foundational large language models (LLMs) and the systems they underpin (ChatGPT and its ilk) are obvious targets for this musing. They are increasingly hailed and harnessed as knowledge management tools, despite being language generators entirely untethered from any requirement of truth or reality. Their outputs aren’t always truthful or factual. Nor should they be expected to be: That isn’t how these systems work.

Yet, as these beguiling AI-enabled systems proliferate, transparency is becoming the new iteration of the trite T&C. Launch PR blitzes proclaiming that these powerful technologies require regulation, but make them widely available with no limitations on their use and without disclosing their data sources. Publish well-designed model cards highlighting all the ways in which the systems are unreliable, while amplifying conversations that exaggerate the systems’ understanding. Note again, in small print, that all outputs require robust independent verification. Provide APIs by which these foundational, yet reliably unreliable, models can be embedded in other systems, many of which then claim to be sources of proven, concrete knowledge rather than content generators or ideation engines. Note that the time required to build more robust guardrails was untenable.

Design the systems to mimic human engagement, complete with first-person retorts, bashful apologies, and emoticons. All of this purports to make the systems easier to engage with but, in reality, plays on our innate tendency to trust something that sounds human. Which brings us to another question: Are we also confusing human-centric design principles with a need to be human-like? More on that to come.

Transparency about an AI system’s capabilities (or lack thereof) is not, by itself, sufficient for responsible, sustainable innovation, especially if a system’s design, availability, and PR expressly downplay germane risks and limitations. Transparency is one integral part of a responsible innovation framework that also includes, among other things, non-deceptive design and self-regulation. If transparency, in its current polished and oft-insidious variations, becomes the new AI governance T&C, there is cause for great concern. Likewise, if transparency becomes the paramount AI governance principle, alarms should sound. Not because these AI systems aren’t immensely impressive and useful, but because they are.


