Building Successful Big Data Projects with DataOps

DataOps is an emerging set of practices, processes, and technologies for building and enhancing data and analytics pipelines to better meet the needs of the business.

The list of failed big data projects is long. They leave end users, data analysts, and data scientists frustrated with long lead times for changes.

At Data Summit 2019, Christopher P. Bergh, CEO and Head Chef of DataKitchen, presented his session, “How to Succeed with DataOps Today,” exploring how to make changes to big data, models, and visualizations quickly and with high quality.

The sixth annual Data Summit conference is being held in Boston, May 21-22, 2019, with pre-conference workshops held on May 20.

“People are starting to talk about DataOps,” Bergh said.

DevOps has resulted in a transformative improvement in software development, and DataOps brings the same approach to data and analytics, Bergh explained.

DataOps can provide continuous delivery of analytics. Customers can deliver insights faster, ensure high quality, add features at the speed of business, and automate/orchestrate the complex environment of people and technology.

Right now, teams face high error rates, deploy too slowly, and struggle to develop, he explained.

There are seven steps to adopting DataOps successfully:

  • Orchestrating Two Journeys
  • Adding Tests And Monitoring
  • Using a Version Control System
  • Branch and Merge
  • Using Multiple Environments
  • Reuse & Containerize
  • Parameterize Your Processing

This Data Summit 2019 presentation is available for review at