Unlocking the Proprietary Potential of GenAI and LLMs

The applications for generative AI (GenAI) are seemingly endless, from identifying patterns in data to rapidly creating human-like content. The push to implement GenAI is just as broad, with enterprises accelerating their AI initiatives across nearly every industry. Yet while many executives are counting on GenAI to deliver, actual implementation and execution often fall short. To realize its potential, enterprises will need to examine their goals for GenAI realistically, along with its ample challenges, including legacy infrastructure issues, security risks, data quality, and more.

Experts joined DBTA’s webinar, Succeeding with Generative AI and LLMs in the Enterprise, to discuss how organizations can effectively adopt GenAI initiatives with key technologies, strategies, and best practices.

Julian Forero, AI product marketing lead at Snowflake, explained that “the most important thing is your data” and its security. He posed two questions: “How do enterprises make sure they keep that data secure? And more importantly, as they start using that data against LLMs…how do you also keep those models secure?”

With the Snowflake platform, organizations can use AI in everyday analytics within seconds—without the need for extensive expertise in the field—as well as build and deploy AI apps in minutes. This rests on a robust foundation that safeguards enterprise IP from unintended use through role-based access definitions on data, models, and apps in Snowflake.
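The role-based access idea described above can be illustrated with a short sketch. Note that this is a hypothetical illustration of the access-control concept, not Snowflake's actual API; all names here are invented:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of role-based access on models (not Snowflake's
# actual API): each model carries a set of roles allowed to invoke it.
@dataclass
class ModelRegistry:
    grants: dict = field(default_factory=dict)  # model name -> allowed roles

    def grant(self, model: str, role: str) -> None:
        self.grants.setdefault(model, set()).add(role)

    def can_invoke(self, model: str, role: str) -> bool:
        return role in self.grants.get(model, set())

registry = ModelRegistry()
registry.grant("summarizer-llm", "analyst")

print(registry.can_invoke("summarizer-llm", "analyst"))  # True
print(registry.can_invoke("summarizer-llm", "intern"))   # False
```

The point of gating at the model (not just the data) layer is that a model fine-tuned on sensitive data can leak it; access definitions therefore need to cover data, models, and apps alike.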

Making AI easy to build, scalable, and secure is the primary focus for Snowflake in regard to GenAI and LLMs, according to Forero. At its core, Snowflake Cortex—an intelligent, fully managed service that serves industry-leading LLMs and vector functions—works to empower enterprises with access to LLMs, vector search, and more.

Some of the GenAI use cases applicable with Snowflake include:

  • Unstructured data processing/generation, augmenting data used for analytics and ML models with AI-powered data processing
  • Knowledgebase chatbots for finding answers in documents or other text data
  • Multi-step logical reasoning engines, such as a customer service agent or a BI assistant that translates English to SQL

Additionally, Snowflake offers Snowflake Copilot, an LLM-powered assistant that brings the power of GenAI to everyday Snowflake coding tasks. It generates SQL from natural language and refines queries through conversation, ultimately enhancing user productivity.

Tim Padilla, director, sales and consulting, North America at Datavid, argued that knowledge—and a knowledge management platform—is essential to solving complex problems for organizations.

“Turning our information into knowledge with modern tools allows us to embed our experience and gain insight that was previously buried in the data,” said Padilla.

To leverage GenAI and LLMs as the modern tools for enterprises to solve those problems, Padilla explained that vetting AI technology through “Problem Framing” is an essential beginning step.

“AI is technology like any other; it has unique considerations for implementation, use, and governance,” Padilla explained.

To vet AI tech, enterprises should:

  • Have a business goal that is in context with your current processes—start with something that you can understand and measure.
  • Have a way to use the technology, e.g., an app that can benefit from enhanced search or a generative solution that reduces research time for authoring.
  • Frame or reframe the problem by adopting “how might we” questions to improve the quality of AI answers.

Padilla then offered an example of a unified sample architecture using retrieval-augmented generation (RAG). It combines a data hub responsible for semantically tagging private data, knowledge graphs, orchestration, and user queries; GenAI then uses the semantic knowledge derived from the relevant private data to generate an answer to each user query.
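The RAG flow described above can be sketched minimally. This is a toy illustration under stated assumptions: real pipelines use vector embeddings for retrieval and an actual LLM for generation, whereas here retrieval is a simple word-overlap score and generation is a stub; the documents and question are invented:

```python
import re
from collections import Counter

# Toy private "knowledge base" standing in for semantically tagged data.
DOCUMENTS = [
    "Invoices are processed within 30 days of receipt.",
    "Support tickets are triaged by severity each morning.",
    "New vendors must pass a security review before onboarding.",
]

def tokenize(text: str) -> Counter:
    return Counter(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, docs: list) -> str:
    # Toy retrieval: pick the document sharing the most words with the
    # query. A production system would compare vector embeddings instead.
    q = tokenize(query)
    return max(docs, key=lambda d: sum((tokenize(d) & q).values()))

def generate(query: str, context: str) -> str:
    # Stand-in for an LLM call that grounds its answer in the retrieved
    # context rather than in the model's own training data.
    return f"Based on our records: {context}"

question = "How are invoices processed?"
context = retrieve(question, DOCUMENTS)
print(generate(question, context))
```

The design point RAG illustrates is separation of concerns: retrieval decides *which* private knowledge is relevant, so the generative step can stay grounded in it instead of hallucinating.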

Dmitri Ryssev, sales engineer at Dataiku, focused his discussion on enabling AI at scale through collaboration.

Dataiku offers an end-to-end platform for building and deploying enterprise-grade GenAI apps that augment the intelligence of operations and processes. With plug-and-play solutions for common use cases, full, enterprise-grade integrations for data and compute infrastructures, and insights and predictions for traditional analytics and ML, Dataiku empowers enterprises to rapidly get started with GenAI implementation without sacrificing quality or security.

Dataiku’s LLM Mesh architecture allows customers to connect to different models and providers, optimizing costs, security, and performance, while maintaining choice and agility. This is all centralized within a single, governed platform that allows enterprises to ensure that standards are enforced for deploying safe, reliable applications.

The LLM Mesh decouples the application and AI service layers so that end users can develop their apps, RAG pipelines, chatbots, and other GenAI use cases without having to worry about connecting to various API providers or hosting their own local models. This is all taken care of in the background by Dataiku, allowing users to focus on the business logic of their GenAI initiatives. The LLM Mesh further offers:

  • Secure gateway enforcement
  • Safe use by defining filters for queries and responses, including PII detection
  • Cost controls
  • Query and response enrichment
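The gateway pattern behind these features can be sketched briefly. This is a hedged illustration of the general idea of a central LLM gateway, not Dataiku's LLM Mesh implementation; the provider names, prices, and PII rule below are all invented:

```python
import re

# Made-up providers and per-call prices, for illustration only.
PROVIDERS = {
    "cheap-model": lambda prompt: f"[cheap-model] answer to: {prompt}",
    "premium-model": lambda prompt: f"[premium-model] answer to: {prompt}",
}
COST_PER_CALL = {"cheap-model": 0.001, "premium-model": 0.01}

# Toy PII filter: block queries containing a US SSN-shaped string.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

class LLMGateway:
    """One entry point that routes to providers, screens queries for
    PII, and tracks spend, so apps never talk to providers directly."""

    def __init__(self):
        self.spend = 0.0

    def complete(self, prompt: str, model: str = "cheap-model") -> str:
        if SSN_PATTERN.search(prompt):
            raise ValueError("query blocked: possible PII detected")
        self.spend += COST_PER_CALL[model]
        return PROVIDERS[model](prompt)

gateway = LLMGateway()
print(gateway.complete("Summarize last quarter's support tickets"))
print(f"total spend: ${gateway.spend:.3f}")
```

Because every call passes through one governed chokepoint, swapping providers, tightening filters, or enforcing budgets requires no change to the applications themselves, which is the agility the decoupling is meant to buy.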

For an in-depth discussion of tips for successful GenAI and LLM implementation, you can view an archived version of the webinar here.