Technologies for Creating Effective, Agile, and Innovative AI and ML


AI and ML present a gold mine of opportunities for enterprises of any size. The rise of LLMs, knowledge graphs, and MLOps represents the massive strides made in ML and AI innovation, helping businesses become agile and efficient while embracing technologies that deliver repeatable, scalable business value.

DBTA held a webinar, Game-Changing Technologies in Machine Learning and AI, joined by a panel of experts who explored the bustling world of AI and ML solutions that can propel any business into the modern age, an era characterized by the need for business flexibility, innovation, and efficiency.

Julian Forero, product marketing lead at Snowflake, began with the current state of LLMs, most of which are trained on internet data and hosted externally. Forero pointed to a presently untapped opportunity to leverage LLMs for greater, more positive business outcomes: training LLMs on enterprise data.

However, where that LLM is hosted poses another challenge, according to Forero. While self-hosted LLMs offer tighter privacy and security, they carry a heavy operational burden and limit model accessibility. Externally hosted LLMs reduce the operational burden and provide access to third-party models, yet fail to keep proprietary data secure.

Fortunately, Snowflake addresses each of these aspects of LLM implementation. Snowflake empowers organizations to use LLMs to be smarter about their data and to enhance that data, all without compromising security or governance.

Snowflake’s acquisitions of Applica and Streamlit emphasize the company’s focus on enabling safe LLM implementation. With Applica, Snowflake users can leverage a purpose-built, multi-modal LLM for document intelligence spanning PDFs, emails, charts, and more. Streamlit provides the ability to build a front end for enterprise-trained LLMs, enabling quick builds of chat applications in Python. These offerings are united under Snowflake’s security perimeter, ensuring that data is protected while its value is maximized.
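As a rough illustration of what such a chat front end can look like, the sketch below uses Streamlit's standard chat primitives. The answer_with_llm() helper is a hypothetical placeholder for an enterprise-trained model, not a Snowflake or Streamlit API.

```python
# Minimal Streamlit chat front end. answer_with_llm() is a hypothetical
# stand-in for a call to an enterprise-trained LLM.
import streamlit as st

def answer_with_llm(prompt: str) -> str:
    # Placeholder: swap in a real call to your hosted or in-platform model.
    return f"(model response to: {prompt})"

st.title("Enterprise LLM Chat")

# Keep the running conversation in session state so it survives reruns.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay prior turns.
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

# Accept a new question and record both sides of the exchange.
if prompt := st.chat_input("Ask a question about your data"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.write(prompt)
    reply = answer_with_llm(prompt)
    st.session_state.messages.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.write(reply)
```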

Dan Jones, SVP of product management at KX, focused his discussion on the widely popular topic of generative AI. He argued that the key to achieving business value with generative AI is through the combination of an LLM and temporal data, achievable through KX’s KDB.AI—a vector database and search engine that enables developers to construct scalable, reliable, and real-time applications.

KDB.AI offers a variety of components, such as advanced search, recommendation, and personalization, that supply robust support for any AI application. It can solve many different AI challenges facing companies, offering ROI in the form of a reduced chance of hallucinations, democratization of real-time datasets, and increased trust and regulatory compliance.

Whether the challenge concerns numerical and temporal analysis, real-time data, or explainability and transparency, KDB.AI breaks down each roadblock preventing organizations from unlocking the true value of their AI applications.
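To make the vector-search idea concrete, here is a minimal, generic sketch of similarity-based retrieval using NumPy. It illustrates the pattern a vector database such as KDB.AI automates at scale; it does not use the KDB.AI client API, and the data and names are invented for illustration.

```python
# Generic vector-search sketch: embed documents, embed a query, and rank
# documents by cosine similarity. A vector database performs this kind of
# retrieval at scale, typically over streaming, time-stamped data.
import numpy as np

def cosine_top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3) -> np.ndarray:
    # Normalize so dot products become cosine similarities.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    # Indices of the k most similar documents, best first.
    return np.argsort(scores)[::-1][:k]

# Toy example with random "embeddings"; a real system would use an
# embedding model and feed the retrieved documents to an LLM as context.
rng = np.random.default_rng(0)
doc_vecs = rng.normal(size=(100, 384))
query_vec = rng.normal(size=384)
print(cosine_top_k(query_vec, doc_vecs, k=5))
```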

Jesse Miller, senior product manager at Monte Carlo Data, offered a quote from Andrew Ng: “The dirty little secret of AI is that it actually isn't really magic. The data inputs and outputs have to be extremely good. If the data is garbage, the output is garbage. If the data is biased, the output is biased. So, you have to put in a lot of effort to collect clean, well-labeled, unbiased data."

These words from Ng put the focus squarely on data quality, an area where many organizations struggle. According to Monte Carlo market research, 30-50% of data engineering time is spent on data quality issues. Furthermore, 80% of data science and analytics teams’ time is spent collecting, cleaning, and preparing data, according to a report from CrowdFlower.

Miller explained that focusing on data quality, as opposed to the model, is considered a data-centric approach to AI. How can this sort of approach be achieved? Through a data observability platform, much like the one offered by Monte Carlo.

Monte Carlo’s data observability platform centralizes observability across data lakes, data lakehouses, data warehouses, modeling/orchestration tools, and BI tools to detect, resolve, and prevent data quality issues more effectively. Data observability is the key to data quality, which, in turn, is the key to effective data-centric AI.
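To give a rough sense of the kinds of checks an observability platform automates, the pandas sketch below computes freshness, null rates, and row volume for a toy table. The column names, thresholds, and sample data are illustrative assumptions, not Monte Carlo's API.

```python
# Illustrative data quality checks: freshness, completeness (null rate),
# and row volume. Column names and thresholds are invented for the example.
import pandas as pd

def check_table(df: pd.DataFrame, updated_at_col: str = "updated_at") -> dict:
    now = pd.Timestamp.now(tz="UTC")
    latest = pd.to_datetime(df[updated_at_col], utc=True).max()
    freshness_hours = (now - latest).total_seconds() / 3600
    null_rates = df.isna().mean()  # per-column fraction of missing values
    return {
        "fresh_within_24h": freshness_hours <= 24,
        "max_null_rate": float(null_rates.max()),
        "row_count": len(df),
    }

# Toy table with a missing ID and a missing amount.
df = pd.DataFrame({
    "order_id": [1, 2, 3, None],
    "amount": [10.0, 12.5, None, 9.9],
    "updated_at": pd.to_datetime(["2024-01-01T00:00:00Z"] * 4),
})
print(check_table(df))
```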

For an in-depth discussion of AI and ML technologies, you can view an archived version of the webinar here.
