AI Can Give You Answers Fast – Accuracy May Take Longer


The “Lethal Trifecta”

LLMs do not separate data from instructions: anything a model reads can act as a command. When a single system combines three capabilities (access to private data or document repositories, exposure to untrusted external content, and the ability to communicate with the outside world via tools), it meets what Simon Willison calls the "lethal trifecta." Recent work has begun to highlight the dangers posed by LLMs that satisfy all three conditions, as well as the ability of LLM agents to misuse tools and facilitate attacks. As tool-use capabilities become pervasive in AI products, the need to detect and mitigate these risks grows.
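To make the condition concrete, here is a minimal Python sketch of how a team might flag agent deployments that combine all three capabilities. The AgentConfig structure and its fields are hypothetical illustrations, not part of any real framework:

```python
from dataclasses import dataclass

@dataclass
class AgentConfig:
    """Hypothetical description of an LLM agent's capabilities."""
    reads_private_data: bool          # e.g., document repositories, email
    ingests_untrusted_content: bool   # e.g., web pages, inbound messages
    can_communicate_externally: bool  # e.g., HTTP tools, outbound email

def has_lethal_trifecta(cfg: AgentConfig) -> bool:
    """True when all three of Willison's conditions are met at once."""
    return (cfg.reads_private_data
            and cfg.ingests_untrusted_content
            and cfg.can_communicate_externally)

# A research assistant that reads internal docs, browses the web,
# and can send email meets all three conditions.
assistant = AgentConfig(True, True, True)
if has_lethal_trifecta(assistant):
    print("Warning: remove at least one capability or add strict mediation.")
```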

Adopting a Comprehensive Data Strategy for AI

Before implementing AI solutions, companies must establish robust data management practices, encompassing data governance, security, and accessibility. Collectively, these practices map onto the well-known "CIA triad": confidentiality, integrity, and availability.

Data Hygiene as a Foundation

Secure, clean, and accurate data is essential for trustworthy AI results. Data hygiene involves verifying the accuracy of records, correcting errors, eliminating duplicates, and ensuring that information is current and complete. Without this foundation, AI systems will amplify existing data problems.
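As a concrete illustration, the sketch below runs a basic hygiene pass over a hypothetical customer table using pandas. The column names, the email pattern, and the freshness cutoff are all illustrative assumptions, not a prescribed standard:

```python
import pandas as pd

# Hypothetical customer table; columns and values are illustrative only.
df = pd.DataFrame({
    "email": ["a@x.com", "a@x.com", "bad-address", "b@y.com"],
    "updated": pd.to_datetime(["2025-01-05", "2025-01-05",
                               "2023-02-01", "2025-06-10"]),
})

# Eliminate duplicate records.
df = df.drop_duplicates(subset="email")

# Drop records with malformed email addresses.
df = df[df["email"].str.contains(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", regex=True)]

# Flag records that are no longer current (assumed one-year cutoff).
cutoff = pd.Timestamp("2024-01-01")
print(df.assign(stale=df["updated"] < cutoff))
```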

Governance and Control

Companies need clear policies about data quality, including who is responsible for maintaining it, how errors are identified and corrected, and what standards must be met before data enters AI systems. This governance framework should cover both internal company data and any external sources used by AI tools; one lightweight enforcement mechanism is sketched below.
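A quality gate can block records until they satisfy the governance rules. The Python sketch below (assuming Python 3.9+) is a hypothetical illustration; the rule names and record fields are invented for the example:

```python
from typing import Callable

# Hypothetical governance rules: each maps a record to a pass/fail check.
RULES: dict[str, Callable[[dict], bool]] = {
    "has_owner": lambda r: bool(r.get("owner")),      # someone is responsible
    "source_known": lambda r: bool(r.get("source")),  # provenance is recorded
    "reviewed": lambda r: r.get("review_status") == "approved",
}

def admit_to_ai_pipeline(record: dict) -> tuple[bool, list[str]]:
    """Return (admitted, failed_rule_names) for one record."""
    failures = [name for name, check in RULES.items() if not check(record)]
    return (not failures, failures)

ok, failed = admit_to_ai_pipeline({"owner": "data-team", "source": "CRM"})
print(ok, failed)  # False ['reviewed']: blocked until review is approved
```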

Transparent Data Lineage

Organizations must be able to trace where data comes from and how it has been processed. When AI provides a result, users should be able to understand what sources informed that output. No serious manager should accept data of unknown origin, especially for critical business decisions.
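In practice, lineage can be as simple as carrying provenance metadata alongside every piece of retrieved text, so that each output can list its sources. The following sketch assumes a hypothetical SourcedPassage structure and stubs out the model's answer:

```python
from dataclasses import dataclass

@dataclass
class SourcedPassage:
    """A retrieved text chunk plus the lineage needed to trace it."""
    text: str
    origin: str        # e.g., file path or URL
    retrieved_at: str  # when it entered the pipeline
    transform: str     # how it was processed (chunking, cleaning, ...)

def answer_with_sources(passages: list[SourcedPassage]) -> str:
    """Pair the (stubbed) answer with the lineage of every input."""
    citations = "\n".join(
        f"- {p.origin} (retrieved {p.retrieved_at}, via {p.transform})"
        for p in passages
    )
    return f"[model answer here]\n\nSources:\n{citations}"

print(answer_with_sources([
    SourcedPassage("Q3 revenue rose 4%.", "reports/q3.pdf",
                   "2025-09-01", "pdf-extract"),
]))
```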

Specialized Tools and Human Oversight

Rather than relying on general-purpose AI for everything, companies should consider specialized tools designed for specific tasks. For example, some systems are built specifically to identify conspiracy theories or check for plagiarism. Amazon uses one AI system to generate content and another, trained on different information, to review it. There is an emerging trend toward developing smaller, specialized language models (SLMs) built for narrow tasks, as opposed to the larger, general-purpose LLMs behind ChatGPT and similar products.
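The generate-then-review pattern the Amazon example describes can be sketched in a few lines. The code below is not Amazon's system; both models are stubs standing in for real LLM calls, and the acceptance test is a placeholder (assumes Python 3.10+):

```python
def generator(prompt: str) -> str:
    """Stub for a model that drafts content; replace with a real LLM call."""
    return f"Draft product page for: {prompt}"

def reviewer(draft: str) -> bool:
    """Stub for a second, independently trained model that checks the draft."""
    return "product page" in draft.lower()  # placeholder acceptance test

def generate_with_review(prompt: str, max_tries: int = 3) -> str | None:
    """Regenerate until the reviewer accepts or attempts run out."""
    for _ in range(max_tries):
        draft = generator(prompt)
        if reviewer(draft):
            return draft
    return None  # escalate to a human instead of shipping a bad draft

print(generate_with_review("wireless headphones"))
```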

Most importantly, AI should supplement human expertise, not replace it. The technology excels at gathering and organizing information, but human judgment is still needed to assess quality, relevance, and appropriateness. In a paper published earlier this year, Arvind Narayanan and Sayash Kapoor, two computer scientists at Princeton University, argue that we should view AI as a "normal" technology: one that gets integrated into other systems and works best with human oversight. Without that oversight, they argue, AI may be "too error-prone to make business sense."

Apply Common Sense

AI systems lack the basic reasoning abilities that humans take for granted. They can't assess whether information makes logical sense or conflicts with established facts. This means humans must remain actively involved in reviewing and validating AI outputs, especially for critical business decisions.
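One simple way to operationalize this review is a routing rule that auto-approves only routine, high-confidence outputs and escalates everything else to a person. The confidence score, threshold, and high_stakes flag below are assumptions made for illustration:

```python
from dataclasses import dataclass

@dataclass
class AiOutput:
    text: str
    confidence: float  # assume the system reports a 0..1 score
    high_stakes: bool  # e.g., feeds a critical business decision

def route(output: AiOutput, threshold: float = 0.9) -> str:
    """Auto-approve only routine, high-confidence results."""
    if output.high_stakes or output.confidence < threshold:
        return "send to human reviewer"
    return "auto-approve"

print(route(AiOutput("Projected churn: 12%", confidence=0.95, high_stakes=True)))
# -> send to human reviewer: high stakes override the confidence score
```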

Conclusion: Proceed with Informed Caution

Artificial intelligence represents a genuinely useful technology with significant economic potential. When used appropriately, it provides substantial support for various types of work, particularly tasks that involve searching, summarizing, and synthesizing information. However, the current generation of AI tools is not the revolutionary game-changer that some claim.

The key to successful AI adoption lies in understanding both its capabilities and limitations. Companies should approach AI implementation with the same rigor they would apply to any significant technology investment: careful evaluation, clear objectives, proper safeguards, and realistic expectations.

The most significant risk may not be that AI fails to deliver promised benefits, but that organizations become overly dependent on systems they do not fully understand or trust. Before betting their business on AI-generated insights, companies must invest in the data quality, governance frameworks, and human expertise needed to use these tools effectively.

The printing press, despite initial skepticism, ultimately revolutionized human communication and knowledge sharing. AI may well prove similarly transformative—but like the printing press, its true value will emerge only when we learn to use it wisely, with appropriate safeguards and realistic expectations about what it can and cannot do.

The future belongs not to the companies that adopt AI fastest, but to those that adopt it most thoughtfully and keep adapting as the technology changes. Even once the problem of AI data has been mitigated, AI will still pose risks to personal privacy, cybersecurity, the environment, and competition. And at some point, AI investors will want to know when and how they will see returns on the billions of dollars they have already invested.

Sources Cited

  1. Wolff, William. "In Praise of Scribes." Trithemius Scribes, 2009, williamwolff.org/wp-content/uploads/2009/06/TrithemiusScribes.pdf.
  2. Vosoughi, Soroush, et al. "The Spread of True and False News Online." Science, vol. 359, no. 6380, 9 Mar. 2018, pp. 1146-1151.
  3. Thomke, Stefan, et al. "Addressing Gen AI's Quality-Control Problem: What Amazon Learned When It Automated the Creation of Product Pages." Harvard Business Review, Sept.-Oct. 2025, pp. 60-67.
  4. Shroff, Lila. "ChatGPT Gave Instructions for Murder, Self-Mutilation, and Devil Worship: OpenAI's Chatbot Also Said 'Hail Satan.'" The Atlantic, July 2025, www.theatlantic.com/technology/archive/2025/07/chatgpt-ai-self-mutilation-satanism/683649/.
  5. Metz, Cade. "Anthropic Agrees to Pay $1.5 Billion to Settle Lawsuit With Book Authors." The New York Times, 5 Sept. 2025, www.nytimes.com/2025/09/05/technology/anthropic-settlement-copyright-ai.html.
  6. Meaden, James, et al. "COMPASS: A Multi-Dimensional Benchmark for Evaluating Code Generation in Large Language Models." arXiv, 19 Aug. 2025, arxiv.org/pdf/2508.13757v1.
  7. The Economist. "What If Artificial Intelligence Is Just a 'Normal' Technology?" The Economist, 4 Sept. 2025, www.economist.com/finance-and-economics/2025/09/04/what-if-artificial-intelligence-is-just-a-normal-technology.
  8. ---. "'Bad Things Come in Threes.'" The Economist, 27 Sept. 2025.
  9. Whittaker, Meredith. "By Invitation: The World Must Wake Up to the Threats AI Agents Pose to Privacy, Cyber-Security, and Even Competition." The Economist, 13 Sept. 2025.
  10. Hou, Xinyi, et al. "Model Context Protocol (MCP): Landscape, Security Threats, and Future Research Directions." arXiv, 6 Apr. 2025, arxiv.org/html/2503.23278v2.
  11. Liu, Zhiwei, et al. "ConspEmoLLM: Conspiracy Theory Detection Using an Emotion-Based Large Language Model." arXiv, 12 Aug. 2024, arxiv.org/pdf/2403.06765.
  12. "What is Data Hygiene?" Cognizant, www.cognizant.com/us/en/glossary/data-hygiene.
  13. Zacharias, Melody. "Improve Data Hygiene, Overcome the AI 'GIGO' Problem." Pure Storage Blog, blog.purestorage.com/perspectives/dirty-data-got-your-ai-models-down-heres-how-to-improve-data-hygiene/.
  14. "What is Data Governance?" Google Cloud, cloud.google.com/learn/what-is-data-governance.
  15. "About Data Lineage." Google Cloud, cloud.google.com/dataplex/docs/about-data-lineage.