The Flaws in AI: We’ve Seen This Movie Before
Fear of new technology follows a regular historical pattern. Revolutionary technologies have repeatedly sparked intense debate between passionate advocates and equally determined detractors. The printing press exemplifies this pattern perfectly: despite being one of history's most transformative innovations, it generated fierce controversy that lasted for centuries.
In 1492, the monk Johannes Trithemius wrote "In Praise of Scribes," arguing that handwriting was morally superior to mechanical printing. That was remarkable opposition for the fifteenth century, when many of his contemporaries were hailing the printing press as a "divine" art.
Gottfried Leibniz: A Case Study in Misinformation
The philosopher Gottfried Leibniz is frequently cited as an early critic of the printing press. A quick internet search—using traditional search engines or AI tools—consistently turns up references to a letter he is said to have written to the King of France in 1690. In this letter, Leibniz allegedly lamented "the horrible mass of books that keeps on growing and which might lead to the fall back into barbarism."
Popular AI chatbots confidently present this quote as a historical fact when asked about early reactions to technological change. Yet this authoritative-sounding attribution reveals a troubling problem: the original source is nearly impossible to find.
When AI Admits Its Limitations
To its credit, when pressed for a source, ChatGPT does acknowledge some uncertainty. It concedes that the famous phrase is "actually a paraphrase drawn from a more elaborate passage" and suggests that the original might appear in Leibniz's 1680 manuscript, "Precepts for Advancing the Sciences and Arts." However, this attribution relies on secondary sources, and at least one of those sources makes no mention of Leibniz at all.
Other AI systems like Claude return similar results, all citing the same questionable references. Even requests for the original Latin text come up empty. Tellingly, Wikipedia consistently ranks high in these search results, despite being a source with notoriously variable academic standards.
The Broader Lesson
This example highlights a fundamental issue with AI-generated information: The speed and confidence with which these systems deliver answers can obscure significant gaps in verification and accuracy. Even the most advanced AI systems have serious limitations that business leaders must understand.
The Garbage In, Garbage Out Problem – Magnified
The old computing adage "garbage in, garbage out" applies doubly to AI systems. Most AI models are trained by ingesting massive amounts of data from public internet sources—terabytes and petabytes of information. This wholesale consumption of online content doesn't inspire confidence in accuracy, especially since much internet content is unreliable, biased, or simply wrong.
Unlike traditional computer systems, where bad data affects one calculation, AI systems can amplify errors across thousands of results. It's no longer just "garbage in, garbage out"—it can become "garbage in, an entire landfill out."
Inconsistent but Authoritative-Sounding Answers
AI systems often provide responses that sound confident and well-informed but lack accuracy. They excel at mimicking the style and tone of expert communication without any actual expertise. This creates a dangerous situation where users may trust information simply because it's presented convincingly.
Hallucinations – Errors by Another Name
The tech industry uses the term "hallucinations" to describe when AI systems confidently present false information as fact. These aren't occasional glitches; they're a fundamental characteristic of how these systems work. AI models predict what text should come next based on patterns in their training data, not on actual knowledge or understanding. The results are presented as if they rest on sound reasoning and citations, but the answers often sound more like the office bullshitter than the office oracle, and LLMs are not above making up citations to support their answers.
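To make that mechanism concrete, here is a deliberately tiny sketch in Python, with invented example sentences and no claim about any real model's internals: a toy "language model" that only learns which word tends to follow which in its training text. Production LLMs are vastly more sophisticated, but they share the same basic posture of continuing text plausibly rather than checking facts.

```python
# Toy illustration only -- not how production LLMs are built, but the same
# basic posture: predict the next word from patterns in training text,
# with no step that checks whether the result is true.
import random
from collections import defaultdict

training_text = (
    "leibniz wrote a letter to the king of france . "
    "leibniz wrote a treatise on the sciences . "
    "the king of france praised the sciences ."
)

# Record which words follow which in the training data (the "patterns").
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start, length=12):
    """Produce fluent-looking text by repeatedly sampling a likely next word."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("leibniz"))
# One possible output: "leibniz wrote a letter to the king of france praised
# the sciences ." -- a confident-sounding splice of two unrelated training
# sentences. Scaled up enormously, this is the shape of a "hallucination".
```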
Long-Term Maintainability Issues
AI-generated content, whether code, reports, or strategic documents, often works in the short term but creates maintenance challenges later: the output may function initially yet prove difficult for humans to understand, modify, or improve over time. This is particularly problematic for software code that needs ongoing updates and improvements.
Copyright and Plagiarism Risks
The business models behind many AI companies are questionable at best; they are what we have come to expect from Silicon Valley and what we are told innovation requires. The reality is that to build and develop LLMs, AI companies ingest vast amounts of text, images, and other content, much of it copyrighted, and deal with the legal challenges later. Bluntly, LLMs collect and process publicly available but protected data without the owners' permission, and sometimes without their knowledge (in other words, they steal it). Anthropic admitted as much when it paid $1.5B to settle a lawsuit brought by book authors in September 2025. While this may be a workable strategy for big tech companies, businesses using AI-generated content face real risks of copyright infringement and plagiarism in their own outputs. The ethical implications of this theft are staggering on their own, if somewhat abstract; a suit for copyright infringement because an LLM did not adequately cite its sources is far more tangible and costly. Again, it is difficult to identify the ingredients once the sausage is made.