Databricks, the Data and AI company and pioneer of the data lakehouse paradigm, is releasing Delta Live Tables (DLT), an ETL framework that uses a simple declarative approach to build reliable data pipelines and automatically manage data infrastructure at scale.
Turning SQL queries into production ETL pipelines often requires a lot of tedious, complicated operational work. By applying modern software engineering practices to automate the most time-consuming parts of data engineering, DLT lets data engineers and analysts concentrate on delivering data rather than on operating and maintaining pipelines, according to the vendor.
Delta Live Tables solves this problem by combining modern software engineering practices with automatic infrastructure management, whereas past efforts in the market have tackled only one or the other.
It simplifies ETL development by letting engineers declare the desired outcomes of data transformations.
Delta Live Tables then determines the dependencies across the full data pipeline and automates away virtually all of the manual complexity. It also enables data engineers to treat their data as code and apply modern software engineering best practices such as testing, error handling, monitoring, and documentation to deploy reliable pipelines at scale more easily.
Delta Live Tables fully supports both Python and SQL and is tailored to work with both streaming and batch workloads.
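To illustrate the declarative style, a minimal DLT pipeline in Python might look like the sketch below. It assumes the Databricks pipeline runtime (where `dlt` and `spark` are provided), and the table names, source path, and quality rule are hypothetical examples, not part of this announcement.

```python
import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Raw events ingested from cloud storage (illustrative source path).")
def raw_events():
    # Auto Loader incrementally ingests new files as they arrive.
    return spark.readStream.format("cloudFiles") \
        .option("cloudFiles.format", "json") \
        .load("/data/events")  # hypothetical path

@dlt.table(comment="Cleaned events; DLT infers that this table depends on raw_events.")
@dlt.expect_or_drop("valid_id", "id IS NOT NULL")  # declarative data-quality rule
def clean_events():
    # Reading via dlt lets the framework build the dependency graph
    # and manage the underlying infrastructure automatically.
    return dlt.read_stream("raw_events").where(col("id").isNotNull())
```

The engineer declares only what each table should contain; ordering, infrastructure, and error handling are managed by the framework. This code runs only inside a configured Databricks pipeline, not as a standalone script.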
“The power of DLT comes from something no one else can do—combine modern software engineering practices and automatically manage infrastructure. It’s game-changing technology that will allow data engineers and analysts to be more productive than ever,” said Ali Ghodsi, CEO and co-founder at Databricks. “It also broadens Databricks’ reach; DLT supports any type of data workload with a single API, eliminating the need for advanced data engineering skills.”
For more information about this news, visit https://databricks.com.