Putting Big Data to Work With DataOps

Big data, much like relational data stores, presents challenges of data gravity, in both its attraction and its weight, creating friction that can hamper development and testing teams' ability to get the data they need to be successful. While DevOps combines software development and operations teams working together to foster greater agility in software development and deployment, DataOps takes DevOps a step further by making data central to the process, according to Kellyn Pot'Vin-Gorman, Technical Intelligence Manager at Delphix, who will present a talk titled "Making Big Data Bite-Size With DataOps" at Data Summit.

Citing a Forbes article stating that by 2020 about 1.7 megabytes of new information will be created every second for every human being on the planet, Pot'Vin-Gorman notes that organizations are increasingly seeking to put their data to better use. By making data central to processes and removing it as a pain point, organizations can move with greater ease, she states, citing containers and virtualization as key components of this more streamlined approach. These are critical if organizations are to successfully leverage the vast amounts of data being created and stored for business innovation and make that data available to users in a more timely fashion, says Pot'Vin-Gorman.

Although DataOps is generally well understood by developers and operations teams, it is still an emerging concept that needs broader acceptance among enterprises to help create a more agile approach, says Pot'Vin-Gorman. In her talk, she will use publicly available government data to demonstrate the advantages of DataOps.

Kellyn Pot'Vin-Gorman will present "Making Big Data Bite-Size With DataOps" at Data Summit on Wednesday, May 23, at 10:45 a.m.

For more information about Data Summit 2018, and to register, go here.
