JFrog Unveils Integration with Amazon SageMaker, Uniting Developers, DevSecOps, and Data Scientists

JFrog, the Liquid Software company and creator of the JFrog Software Supply Chain Platform, is unveiling a new integration with Amazon SageMaker—a solution that helps organizations build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows. This integration combines the power of Amazon SageMaker with that of JFrog Artifactory—an artifact management tool for flexible development and trusted delivery at any scale—uniting the software development process with machine learning management.

This integration enables enterprises to deliver ML models in tandem with all other software development components, in turn making each model immutable, traceable, secure, and validated as it matures, according to the companies.

“Bringing these worlds together represents significant progress towards harmonizing machine learning pipelines with established software development lifecycles and best practices,” said Sean Pratt, senior DevOps platform evangelist at JFrog. “Machine Learning models are large, complex binaries consisting of multiple parts—and just as it is for other common software components and artifacts, Artifactory is an ideal place to host, manage, version, trace, and secure models.”

Uniting ML and software development delivery enables enterprises to create a single source of truth for data scientists and developers alike, ultimately ensuring models are accessible, traceable, and tamper-proof. This integration also allows enterprises to:

  • Develop, train, secure, and deploy ML models
  • Identify and block the use of malicious ML models
  • Scan ML model licenses to maintain adherence to company policy and regulatory requirements
  • Store home-grown or internally augmented ML models with extensive access controls and versioning history
  • Bundle and deliver ML models within any software release
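The storing and versioning described above can be sketched in code. The following is a minimal, illustrative example only, assuming a generic Artifactory repository named `ml-models` and Artifactory's standard REST deploy semantics (an HTTP PUT to the repository path, optionally accompanied by checksum headers); the host, repository, model name, and credentials shown are placeholders, not part of the announced integration:

```python
import hashlib
from pathlib import Path


def build_upload_request(base_url: str, repo: str, model_name: str,
                         version: str, artifact: Path) -> tuple[str, dict]:
    """Build the target URL and checksum headers for deploying a versioned
    model artifact to an Artifactory repository via its REST API."""
    data = artifact.read_bytes()
    # Artifactory can verify a deploy against client-supplied checksum
    # headers, helping ensure the stored model is tamper-evident.
    headers = {
        "X-Checksum-Sha256": hashlib.sha256(data).hexdigest(),
        "X-Checksum-Sha1": hashlib.sha1(data).hexdigest(),
    }
    # Encoding the version in the repository path gives each model build an
    # immutable, traceable coordinate alongside other release artifacts.
    url = f"{base_url}/artifactory/{repo}/{model_name}/{version}/{artifact.name}"
    return url, headers


# Hypothetical usage (requires an HTTP client such as requests):
#   url, headers = build_upload_request("https://example.jfrog.io",
#                                       "ml-models", "churn-classifier",
#                                       "1.4.2", Path("model.tar.gz"))
#   requests.put(url, data=Path("model.tar.gz").read_bytes(),
#                headers=headers, auth=("user", "token"))
```

From there, the versioned model can be resolved, scanned, and bundled into a release like any other binary in the repository.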

“This integration…[will] make the world of software development much more efficient, thorough, automated, secure, and compliant, which is critical in today’s increasing threat landscape where software supply chain attacks are on the rise,” Pratt added.

In addition to this latest integration, JFrog is debuting new versioning features for its ML Model Management solution, driving greater transparency around each model version. This enables developers, DevOps teams, and data scientists to maintain the accuracy and security of ML models being used.

“With all signs indicating that the use of ML models will only continue to increase in the future, it’s imperative that DevOps and Security practitioners act to serve the MLOps needs of their organizations,” said Pratt. “JFrog’s ML Model Management makes it easy for DevOps and security teams to leverage their existing JFrog solution to meet their organization’s MLOps needs, integrating seamlessly into the workflows of ML Engineers and Data Scientists. In doing so, organizations can apply their existing practices and policies toward ML model development, extending their secure software supply chain.”

To learn more about JFrog’s latest integration and solution updates, please visit