Snowflake Arctic Sets a New Bar for Open Source, Enterprise-Grade LLMs

Snowflake, the Data Cloud company, is debuting a state-of-the-art large language model (LLM) designed to be the most open, enterprise-grade LLM available today: Snowflake Arctic. The model leverages a Mixture-of-Experts (MoE) architecture, designed by a team of industry-leading researchers, to deliver advanced intelligence and efficiency at scale.

Snowflake Arctic propels the industry-wide movement toward adopting open source LLMs for generative AI (GenAI) implementation. As a truly open model released under an Apache 2.0 license, Snowflake Arctic allows ungated personal, research, and commercial use. Snowflake also provides code templates and flexible inference and training options, so users can customize deployments with their framework of choice, including NVIDIA NIM with NVIDIA TensorRT-LLM, vLLM, and Hugging Face.

“There has been a massive wave of open source AI in the past few months,” said Clement Delangue, CEO and co-founder of Hugging Face. “We're excited to see Snowflake contributing significantly with this release not only of the model with an Apache 2.0 license but also with details on how it was trained. It gives the necessary transparency and control for enterprises to build AI and for the field as a whole to break new grounds.”

The LLM’s MoE architecture is designed to improve both training efficiency and model performance, amplified by a data composition that focuses specifically on enterprise needs. Backed by Snowflake’s diverse AI research team and trained on Amazon Elastic Compute Cloud (Amazon EC2) P5 instances, Snowflake Arctic is positioned to establish a new baseline for the training speed of state-of-the-art, open, enterprise-grade models, according to the company.

“Snowflake and AWS are aligned in the belief that generative AI will transform virtually every customer experience we know,” said David Brown, vice president of compute and networking at AWS. “With AWS, Snowflake was able to customize its infrastructure to accelerate time-to-market for training Snowflake Arctic. Using Amazon EC2 P5 instances with Snowflake’s efficient training system and model architecture co-design, Snowflake was able to quickly develop and deliver a new, enterprise-grade model to customers. And with plans to make Snowflake Arctic available on AWS, customers will have greater choice to leverage powerful AI technology to accelerate their transformation.”

As part of the Snowflake Arctic model family, the Snowflake Arctic LLM, when accessed through Snowflake Cortex, can accelerate users’ ability to build production-grade AI apps at scale, with the benefits of the Data Cloud’s security and governance perimeter.

“This is a watershed moment for Snowflake, with our AI research team innovating at the forefront of AI,” said Sridhar Ramaswamy, CEO of Snowflake. “By delivering industry-leading intelligence and efficiency in a truly open way to the AI community, we are furthering the frontiers of what open source AI can do. Our research with Arctic will significantly enhance our capability to deliver reliable, efficient AI to our customers.”

In addition to the launch of the Arctic LLM, Snowflake is debuting Arctic embed, a family of embedding models available to the open source community under an Apache 2.0 license. According to the company, these embedding models are engineered to deliver leading retrieval performance, offering enterprises a robust, cost-conscious solution for combining proprietary datasets with LLMs.

The embedding models are immediately available on Hugging Face and will soon be released as part of the Snowflake Cortex embed function.

To learn more about Snowflake Arctic, please visit