MapR Technologies, Inc., provider of a data platform for AI and analytics, today announced new capabilities in the MapR Data Platform.
The updates deepen integration with Kubernetes core components for primary Spark and Drill workloads. These capabilities make it easier to manage highly elastic workloads, support just-in-time deployments, and allow compute and storage to scale independently.
Organizations re-architecting their applications or building next-generation real-time data lakes can now run Spark and Drill in a Kubernetes model and easily leverage the elasticity and agility of such clusters.
In early 2019, MapR enabled persistent storage for compute running in Kubernetes-managed containers through a CSI-compliant volume driver plugin.
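In general, a CSI-compliant driver is consumed through a StorageClass and a PersistentVolumeClaim. The sketch below illustrates that pattern only; the provisioner name, StorageClass name, and sizes are assumptions for illustration, not values documented in this announcement:

```yaml
# Illustrative only: the provisioner and resource names below are
# assumptions, not documented values from this release.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mapr-storage            # hypothetical StorageClass name
provisioner: example.mapr.csi   # hypothetical CSI driver name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: spark-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi             # illustrative size
  storageClassName: mapr-storage
```

A pod (for example, a Spark executor) would then mount `spark-data` as a volume, letting the container consume platform storage without knowing where the data physically lives.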
With this announcement, MapR further expands its portfolio of features and allows the deployment of Spark and Drill as compute containers orchestrated by Kubernetes.
This deployment model allows end users, including data engineers, to run compute workloads in a Kubernetes cluster independent of where the data is stored or managed.
Features in this release include:
- Tenant Operator
- Spark Job Operator
- Drill Operator
- CSI Driver Operator
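Tenant isolation of this kind maps naturally onto standard Kubernetes namespaces and resource quotas. As a hedged sketch (the tenant name and limit values are illustrative assumptions, not MapR-documented defaults), a tenant boundary might look like this:

```yaml
# Illustrative tenant boundary: a namespace plus a ResourceQuota.
# Names and limit values are assumptions for illustration.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-analytics
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-analytics-quota
  namespace: tenant-analytics
spec:
  hard:
    requests.cpu: "16"      # total CPU the tenant may request
    requests.memory: 64Gi   # total memory the tenant may request
    limits.cpu: "32"
    limits.memory: 128Gi
```

With a quota like this in place, Spark or Drill pods scheduled into the tenant's namespace share the same underlying data platform while remaining bounded by per-tenant resource limits.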
“MapR is paving the way for enterprise organizations to easily do two key things: start separating compute and storage and quickly embrace Kubernetes when running analytical AI/ML apps,” said Suresh Ollala, SVP Engineering, MapR. “Deep integration with Kubernetes core components, like operators and namespaces, allows us to define multiple tenants with resource isolation and limits, all running on the same MapR platform. This is a significant enabler for not only applications that need the flexibility and elasticity but also for apps that need to move back and forth from the cloud.”
For more information about this release, visit https://mapr.com/.