Databricks Model Serving Streamlines Machine Learning Operations for the Lakehouse


Databricks, the lakehouse company, is launching Databricks Model Serving, a solution designed to simplify the management and scaling of production machine learning (ML) within the Databricks Lakehouse Platform. By using the Lakehouse Platform, Databricks customers can deploy real-time ML systems across their organizations without having to manage the underlying infrastructure.

Though AI/ML has continued to grow in popularity and in potential use cases, many organizations see its efficiencies as an opportunity for innovation. However, the heavy design, management, and maintenance burden of ML projects falls on ML experts, creating tension between an enterprise's innovation ambitions and its ML experts' capacity to build and scale these systems.

Databricks Model Serving addresses these challenges of building and operating ML systems, giving organizations access to highly available, low-latency model serving. The solution, fully managed by Databricks, lets enterprises easily integrate real-time ML predictions into production workflows, with capacity that scales up or down as demand requires. According to the company, customers save on operational costs and pay only for the compute they use.

“Databricks Model Serving accelerates data science teams’ path to production by simplifying deployments, reducing overhead, and delivering a fully integrated experience directly within the Databricks Lakehouse,” said Patrick Wendell, co-founder and VP of engineering at Databricks. “This offering will let customers deploy far more models with lower time to production, while also lowering the total cost of ownership and the burden of managing complex infrastructure.”

Databricks emphasizes its unified, data-centric approach to ML from the lakehouse, which lets organizations embed AI at scale and serve models from the same platform used for data management and ML training, according to the company. Users can manage the entire ML process, from experimentation to training to production, from a single pane of glass within Databricks, greatly simplifying workflows, accelerating deployments, and reducing errors.

The Databricks Model Serving solution integrates with other Lakehouse Platform capabilities, including the Feature Store, MLflow, and Unity Catalog. The solution is now generally available on AWS and Azure.
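For readers curious what the MLflow integration looks like in practice, the sketch below shows a minimal, hypothetical workflow: logging and registering a model with MLflow, then scoring it against a Databricks serving endpoint over REST. The endpoint name (iris-endpoint), environment variables, and feature values are illustrative placeholders, not Databricks' official example.

```python
# Minimal sketch (assumptions noted above): register a model with MLflow,
# then query a Databricks Model Serving endpoint via its REST scoring route.
import os

import mlflow
import requests
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# 1) Train a simple model and register it in the MLflow Model Registry.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)
with mlflow.start_run():
    mlflow.sklearn.log_model(model, "model", registered_model_name="iris_classifier")

# 2) After the registered model is attached to a serving endpoint
#    (here assumed to be named "iris-endpoint"), score it with a REST call.
workspace_url = os.environ["DATABRICKS_HOST"]   # e.g. https://<workspace>.cloud.databricks.com
token = os.environ["DATABRICKS_TOKEN"]          # personal access token (placeholder)

response = requests.post(
    f"{workspace_url}/serving-endpoints/iris-endpoint/invocations",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "dataframe_records": [
            {
                "sepal length (cm)": 5.1,
                "sepal width (cm)": 3.5,
                "petal length (cm)": 1.4,
                "petal width (cm)": 0.2,
            }
        ]
    },
)
print(response.json())  # predicted class for the submitted record
```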

“As a leading global appliance company, Electrolux is committed to delivering the best experiences for our consumers at scale; we sell approximately 60 million household products in around 120 markets every year. Moving to Databricks Model Serving has supported our ambitions and enabled us to move quickly; we reduced our inference latency by 10x, helping us deliver relevant, accurate predictions even faster,” said Daniel Edsgärd, head of data science at Electrolux. “By doing model serving on the same platform where our data lives and where we train models, we have been able to accelerate deployments and reduce maintenance, ultimately helping us deliver for our customers and drive more enjoyable and sustainable living around the world.”

For more information about Databricks Model Serving, please visit https://www.databricks.com/.
