HPE and NVIDIA Boost Partnership to Support the Developer AI Lifecycle


HPE Private Cloud AI is adding feature branch model updates from NVIDIA AI Enterprise, which include AI frameworks, NVIDIA NIM microservices for pre-trained models, and SDKs, further aiding developers.

According to the vendors, feature branch model support will allow developers to test and validate software features and optimizations for AI workloads.

Combined with existing support for production branch models, which include built-in guardrails, HPE Private Cloud AI will enable businesses of every size to build developer systems and scale to production-ready agentic and generative AI (GenAI) applications while maintaining a safe, multi-layered approach across the enterprise.

HPE Private Cloud AI, a full-stack solution for agentic and GenAI workloads, will also support the NVIDIA Enterprise AI Factory validated design.

The turnkey, cloud-based AI factory, co-developed with NVIDIA, includes a dedicated developer solution that helps customers extend unified AI strategies across the business, enabling more profitable workloads and significantly reducing risk.

Additionally, HPE Alletra Storage MP X10000 will introduce an SDK that works with the NVIDIA AI Data Platform reference design, connecting HPE's newest data platform with NVIDIA's customizable design. The integration will offer customers accelerated performance and intelligent pipeline orchestration for agentic AI.

As part of HPE's growing data intelligence strategy, the new X10000 SDK enables the integration of context-rich, AI-ready data directly into the NVIDIA AI ecosystem.

This empowers enterprises to streamline unstructured data pipelines for ingestion, inference, training, and continuous learning across NVIDIA-accelerated infrastructure, HPE said.

Primary benefits of the SDK integration include:

  • Unlocking data value through flexible inline data processing, vector indexing, metadata enrichment, and data management (a generic sketch of this pattern follows the list).
  • Driving efficiency with remote direct memory access (RDMA) transfers between GPU memory, system memory, and the X10000 to accelerate the data path with the NVIDIA AI Data Platform.
  • Right-sizing deployments with modular, composable building blocks of the X10000, enabling customers to scale capacity and performance independently to align with workload requirements.
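To make the data-pipeline ideas above more concrete, here is a minimal, generic sketch of metadata enrichment and vector indexing for unstructured text, written in plain Python. It is illustrative only: the embed() function and VectorIndex class are hypothetical stand-ins and are not part of the X10000 SDK, NVIDIA NIM, or the NVIDIA AI Data Platform.

# Illustrative only: a generic metadata-enrichment and vector-indexing pipeline
# for unstructured documents. embed() and VectorIndex are hypothetical stand-ins,
# not part of the HPE X10000 SDK or the NVIDIA AI Data Platform.
import hashlib
import math

EMBED_DIM = 256

def embed(text: str) -> list[float]:
    """Toy hashing-based embedding; a real pipeline would call an embedding model."""
    vec = [0.0] * EMBED_DIM
    for token in text.lower().split():
        idx = int(hashlib.md5(token.encode()).hexdigest(), 16) % EMBED_DIM
        vec[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec] if norm else vec

class VectorIndex:
    """Brute-force cosine-similarity index that keeps enriched metadata with each vector."""

    def __init__(self) -> None:
        self.vectors: list[list[float]] = []
        self.metadata: list[dict] = []

    def add(self, text: str, meta: dict) -> None:
        # "Metadata enrichment": store source details plus simple derived attributes
        # alongside the embedding so downstream agents can filter and trace results.
        enriched = {**meta, "num_chars": len(text), "preview": text[:80]}
        self.vectors.append(embed(text))
        self.metadata.append(enriched)

    def query(self, text: str, k: int = 3) -> list[dict]:
        q = embed(text)
        scores = [sum(a * b for a, b in zip(v, q)) for v in self.vectors]
        ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
        return [{"score": scores[i], **self.metadata[i]} for i in ranked]

if __name__ == "__main__":
    index = VectorIndex()
    index.add("Quarterly revenue grew on strong AI infrastructure demand.",
              {"source": "reports/q3.txt"})
    index.add("RDMA moves data between GPU memory and storage without extra host copies.",
              {"source": "docs/rdma.md"})
    for hit in index.query("How does RDMA accelerate the data path?"):
        print(round(hit["score"], 3), hit["source"])

A production pipeline would swap the toy hashing embedding for a GPU-accelerated model and the brute-force search for a scalable vector index, but the flow of ingest, enrich, index, and query stays the same.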

Customers will be able to use raw enterprise data to inform agentic AI applications and tools by seamlessly unifying storage and intelligence layers through RDMA transfers.

HPE and NVIDIA are working together to enable a new era of real-time, intelligent data access for customers from the edge to the core to the cloud, the vendors said.

For more information about this news, visit www.hpe.com.

