Newsletters




Protect AI Debuts Open Source Solutions for Securing AI/ML Applications


Protect AI, the AI and machine learning (ML) security company, is debuting a series of open source software (OSS) tools that aid in enhancing the security of AI and ML environments. The three announced tools—NB Defense, ModelScan, and Rebuff—address the ongoing absence of security-based open source tooling, according to Protect AI.

While it’s clear why an organization may opt for open source tooling, it certainly comes with a few risks—namely in the realm of security. In the specific case of AI and ML applications, this lack of open source security is felt significantly hard, especially considering the recent proliferation of new AI and ML tools, according to the company.

Protect AI’s latest releases aim to fill the apparent gap in securing the AI/ML supply chain, where all three solutions can be implemented as stand-alone tools or integrated with the full Protect AI Platform—a platform designed to provide visibility, audibility, and security for ML systems.

“Most organizations don't know where to start when it comes to securing their ML systems and AI applications,” said Ian Swanson, CEO of Protect AI. “By making NB Defense, Rebuff, and ModelScan available to anyone as permissive open source projects, our goal is to raise awareness for the need to make AI safer and provide tools organizations can start using immediately to protect their AI/ML applications.”

NB Defense is a JupyterLab Extension and CLI tool that scans the notebooks and/or projects of Jupyter Notebooks, an interactive web application for creating and sharing computational documents.

These notebooks often serve as the launching point for model experimentation for many data scientists; NB Defense works to detect leaked credentials, personally identifiable information (PII) disclosure, licensing issues, and security vulnerabilities within these documents. 

ModelScan works to secure ML models that are shared over the internet through code vulnerability scanning. These models—often at risk of a Model Serialization Attack, or malicious code being added during the process of export—are fed through ModelScan to determine if the models harbor unsafe code.

ModelScan supports several formats, including H5, Pickle, and SavedModel, which ultimately protects users when operating with PyTorch, TensorFlow, Keras, Sklearn, and XGBoost.

Finally, Protect AI’s Rebuff—a result of the project’s acquisition by the company in July of this year—injects security into LLM implementation. LLMs are vulnerable to prompt injection (PI), an attack method that targets LLM applications, manipulating model output, exposing sensitive data, and allowing unauthorized actions.

With Rebuff, users can employ a self-hardening prompt injection detection framework that aids in protecting AI apps from PI attacks, applying four layers of defense. These include:

  • Heuristics to detect potentially malicious input prior to reaching the model
  • An LLM engineered to analyze incoming prompts and surface potential attack insights
  • A database of known attacks that enables Rebuff to detect patterns and prevent similar attacks
  • Canary tokens that modify prompts to detect leakages, allowing the framework to store new embeddings for the malicious prompt back into the vector database, preventing future attacks

To learn about Protect AI’s latest open source offerings, please visit https://protectai.com/.


Sponsors