The Next Great Frontier: Automating Data and Application Deployments

Page 2 of 3

Skills and Data Literacy

Klang also pointed to the skills challenge involved in making automation work. “The skills needed for automation are most familiar to those with a software development background,” he pointed out. “But the things that need automating are often the domain of non-developer experts, such as DBAs or server administrators. High-functioning automation requires expertise from both.”

Along with skills, data literacy is another challenge that needs to be tackled if automation is to succeed. “To apply business context effectively and address changing requirements, you need to ensure that everyone working with this data has also achieved a level of data literacy around data matching and can access that information,” said Andrew Lee, vice president of emerging technologies and incubation at Syncsort. “What you need to do next is supplement these foundational skills with the right vocabulary and context so people can confidently appraise the data they’re using. The strategies and algorithms used, including criteria to support or reject a match, need to be clear and understood.”
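Lee's point about clear, understood match criteria can be made concrete with a simple scoring threshold. The sketch below uses Python's standard-library difflib; the 0.85 cutoff and the function names are illustrative assumptions, not anything prescribed in the article:

```python
from difflib import SequenceMatcher

def match_score(a: str, b: str) -> float:
    """Similarity ratio between two normalized strings, in [0.0, 1.0]."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def is_match(a: str, b: str, threshold: float = 0.85) -> bool:
    """Accept a candidate pair only when its score clears the threshold."""
    return match_score(a, b) >= threshold

print(is_match("Acme Corp.", "acme corp"))   # near-duplicate names
print(is_match("Acme Corp.", "Zenith LLC"))  # clearly different names
```

Making the scoring function and threshold explicit, as here, is exactly the kind of shared vocabulary that lets non-specialists appraise why a record pair was accepted or rejected.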

DevOps and DataOps to the Rescue, Maybe

DevOps and DataOps are seen by many as the most effective organizational paths to automating data-driven applications. “The pressure to realize the greatest value from one of their most valuable assets, their corporate data, at an ever-faster rate is driving teams to look to DataOps for improved quality and reduced cycle times,” said Gaurav Rishi, head of products at Kasten.

The speed at which developers are containerizing applications to allow for rapid provisioning, continuous integration/continuous delivery automation, monitoring, auto-scaling, and self-healing has made Ops teams “unbelievably more efficient,” said Monte Zweben, CEO of Splice Machine. Five or 10 years ago, many of these tasks were manual, error-prone, time-consuming, and expensive, resulting in much lower availability, he said. DevOps and DataOps “make an automated flow of applications and data much more achievable.”
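The self-healing and auto-scaling behavior Zweben describes typically reduces to a reconciliation loop: the orchestrator repeatedly compares desired state against actual state and nudges the system toward convergence. A toy Python sketch of that pattern (the function names and one-replica-per-step granularity are illustrative assumptions, not any specific orchestrator's API):

```python
def reconcile(desired: int, actual: int) -> int:
    """One control-loop step: move the actual replica count toward desired."""
    if actual < desired:
        return actual + 1  # start a replacement (self-healing / scale up)
    if actual > desired:
        return actual - 1  # retire an excess replica (scale down)
    return actual          # already converged

def run_loop(desired: int, actual: int, steps: int = 10) -> int:
    """Run the reconciliation loop for a fixed number of iterations."""
    for _ in range(steps):
        actual = reconcile(desired, actual)
    return actual

print(run_loop(desired=3, actual=1))  # recovers from lost replicas
print(run_loop(desired=2, actual=5))  # scales down excess replicas
```

The point of the pattern is that failures need no manual runbook: a crashed replica simply reappears as drift between desired and actual state, and the next loop iteration corrects it.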

These practices need to be baked into corporate culture and processes, however. “You cannot buy DevOps. You must do DevOps,” said Rajesh Raheja, senior vice president of engineering for Boomi. In addition, different teams take different approaches depending on their school of thought and how purist they want to be, such as adopting Scrum versus Kanban. Moreover, some companies have DevOps teams separate from development and operations. “Other teams have Ops take on the system reliability engineer role. Others still have SRE [site reliability engineering] operate independently with a more narrowed charter.”

Wallgren also sees mixed results from working deployments. “Whether teams practice DevOps true to form or in name only varies widely,” he said. “The most cringe-inducing comment I’ve heard recently is, ‘We don’t do DevOps, but we have a team that does that for us.’ If you have a DevOps team that may mean you’ve merely introduced another organizational silo into the mix, which isn’t going to improve things very much. Keep in mind that DevOps isn’t something you are, it’s something you do.”

There’s often a risk of DevOps being adopted for its own sake. “In some cases, it’s adopted almost religiously, which leads to adherence to a philosophy without any consideration of its implications,” said Remella. Data is only as useful as the user’s understanding of it, according to Remella. For example, he said, if one assumes that everything can be done in the application development and infrastructure components, treating databases as solely the operations team’s responsibility with no need to develop in the database, the likely result is infrastructure that is used inefficiently while failing to meet scale and latency SLAs.
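Remella's contrast between treating the database as pure infrastructure and developing in it can be illustrated with a small example using Python's standard-library sqlite3 (the schema and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("east", 10.0), ("east", 20.0), ("west", 5.0)])

# Database as dumb storage: drag every row into the application,
# then aggregate there, moving far more data than the answer needs.
rows = conn.execute("SELECT region, amount FROM orders").fetchall()
totals_app: dict[str, float] = {}
for region, amount in rows:
    totals_app[region] = totals_app.get(region, 0.0) + amount

# Developing in the database: push the aggregation down as SQL,
# so only the grouped totals cross the wire.
totals_db = dict(conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region"))

print(totals_app == totals_db)  # same answer, very different data movement
```

At three rows the difference is invisible; at billions of rows, the app-side version is the inefficiently used infrastructure Remella warns about, since the database could have done the work next to the data.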

Ajay Gandhi, vice president and digital performance evangelist at Dynatrace, sees AI and automation within DevOps as the first key to mastering IT complexity in order to intelligently automate cloud-native software delivery and cloud operations. “However, just any generic AI alone is not enough. It is crucial to implement purpose-built AI that can understand the full context of IT environments, including dependencies between containers, microservices, and applications.”
