Bridging the Gap Between Data Management and DevOps

DevOps adoption can invite a wealth of opportunities for application development, yet data management continues to lack the speed, interoperability, and flexibility required for a successful DevOps initiative at many enterprises. At the same time, organizations are slow to adopt DevOps in the first place for a multitude of reasons, including concerns about data consistency, testing and deployment, legacy systems integration, and overall complexity.

Experts joined DBTA’s webinar, Achieving Greater Agility: Emerging Trends in Data Management and DevOps, to explore the ways in which enterprises are successfully adopting and leveraging DevOps and data management in tandem, particularly in the case of utilizing cloud native apps, microservices, and containerization.

Henry Tam, principal solutions marketing manager at Redis, offered his expertise in simplifying microservice architectures, a critical component of many organizations’ modernization efforts.

Despite their utility, microservice architectures pose several challenges due to the need to maintain isolation, including increased complexity, slow performance, scaling issues, lack of data consistency, and the added burden of legacy systems management.

Tam explained that a key principle for microservice architectures is domain-driven design, often paired with polyglot persistence. Under this design, each service receives its own database that scales as needed, so that every service can cater to its own requirements and remain decoupled.

However, domain-driven design incurs significant licensing costs, as well as a monitoring and management burden, due to the heterogeneous technologies involved.

To address these challenges, Redis’ enterprise-grade platform, built on an open source core, offers several design patterns for microservices to ease their adoption, including:

  • API gateway caching and rate limiting to reduce risk of outages
  • Command query responsibility segregation, or CQRS (cross domain), for transforming legacy write-optimized SQL into fast, read-optimized queries without coupling services
  • Query caching (single domain) to overcome performance issues with legacy databases
  • Interservice communication via a lightweight message broker using the Redis Streams data structure
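To make the first of these patterns concrete, gateway rate limiting is commonly built on per-window request counters; with Redis this is typically done with INCR plus EXPIRE on a key per client and time window. The sketch below mirrors that counter logic with an in-memory stand-in for Redis so the semantics are visible without a live server; the class name, limits, and client ID are hypothetical, not part of any Redis product.

```python
import time

class FixedWindowRateLimiter:
    """Sketch of API gateway rate limiting via per-window counters.

    Mirrors the Redis pattern of INCR on a (client, window) key with an
    EXPIRE matching the window: the first request in a window creates the
    counter, and requests are rejected once the counter exceeds the limit.
    """

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counters = {}  # (client_id, window_number) -> request count

    def allow(self, client_id, now=None):
        now = time.time() if now is None else now
        window_number = int(now) // self.window  # bucket requests by window
        key = (client_id, window_number)
        count = self.counters.get(key, 0) + 1   # equivalent of INCR
        self.counters[key] = count
        return count <= self.limit

# Allow at most 3 requests per client per 60-second window.
limiter = FixedWindowRateLimiter(limit=3, window_seconds=60)
results = [limiter.allow("svc-a", now=1000.0) for _ in range(5)]
print(results)  # first 3 requests allowed, the remaining 2 rejected
```

In production the counters would live in Redis itself so that every gateway instance shares the same view, which is what lets the pattern shield downstream services from traffic spikes and reduce the risk of outages.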

Michael O'Donnell, senior analyst at Quest Software, pointed to several statistics that illustrate the current challenges plaguing data management and DevOps:

  • According to a 2021 Security Compass survey, 96% of respondents said they would benefit from automating security and compliance processes, while 73% reported that manual security and compliance processes slow down code releases.
  • According to a 2022 Tigera report, 96% of respondents stated that security, compliance, and observability are the most challenging aspects of cloud native applications.
  • According to a 2022 CNCF report, 62% of organizations with less-developed cloud native techniques only have containers for pilot projects or limited production use cases.

With these stats in mind, O'Donnell illustrated a few trends that have been gaining steam in the past few years to accommodate the world of data management and DevOps: data mesh, data engineering, and metadata-driven ingestion.
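Of the trends O'Donnell names, metadata-driven ingestion lends itself to a brief illustration: rather than hand-coding one pipeline per source, each source is described by a metadata record, and a single generic loop derives the ingestion work from those records. The sketch below is a minimal, hypothetical rendering of that idea; the source names, fields, and plan format are invented for illustration and do not reflect any particular product.

```python
# Each record describes one source; adding a source means adding metadata,
# not writing a new pipeline. All names here are hypothetical.
METADATA = [
    {"source": "orders_csv", "format": "csv", "target": "raw.orders",
     "columns": ["id", "amount"]},
    {"source": "events_json", "format": "json", "target": "raw.events",
     "columns": ["id", "type"]},
]

def build_pipeline(meta):
    # A real system would return an executable job; here we render a plan
    # string so the metadata-driven structure is visible.
    cols = ", ".join(meta["columns"])
    return (f"load {meta['format']} from {meta['source']} "
            f"({cols}) into {meta['target']}")

plans = [build_pipeline(m) for m in METADATA]
for plan in plans:
    print(plan)
```

The benefit is that ingestion behavior is governed by data about the data, which is also what makes the cataloging and governance steps described next easier to automate.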

While these trends can certainly introduce a wide variety of benefits to an organization’s data management and DevOps processes, O’Donnell emphasized Quest’s seven steps to maximizing data value with erwin by Quest data modeling and data intelligence:

  1. Model: Design data architecture
  2. Catalog: Search and find data easily
  3. Curate: Enrich data with business context
  4. Govern: Apply business rules and policies
  5. Observe: Raise data visibility for proactive management
  6. Score: Automate data profiling and quality scoring
  7. Shop: Make trusted, governed data widely accessible

David Leigh, senior principal solutions engineer at BMC Software, explained why service orchestration and automation are essential for DevOps, despite the commonly held belief that they are not truly developers’ tools. These reasons include:

  • Speed and efficiency
  • Consistency and reliability
  • Scalability
  • Collaboration and communication
  • Cost efficiency
  • Risk mitigation
  • Focus on value-adding activities
  • Flexibility

Leigh emphasized the need for a renewed focus on ‘Ops’, bringing service orchestration and automation into the DevOps mix to address data operationalization. Full-stack operationalization through the end-to-end orchestration of data, tools, code, and environments for an application is critical to improving the speed, efficiency, transparency, and reliability of those apps.

However, complexity challenges remain a roadblock to operationalization and orchestration, according to Leigh. Business users joining the DevOps lifecycle—with goals ranging from better customer-facing applications to better ways of ingesting data to better analytics and understanding—have surfaced hybrid complexity by adopting a variety of technologies to accomplish those tasks. This, in turn, produces siloed automation, where technologies are pocketed among the people who use them.

The solution, Leigh pointed out, is introducing transparency through mapping out and identifying all the technologies that need to work together and watching how that process is run.

This is accomplished through a service orchestration and automation platform—such as BMC Control-M or BMC Helix Control-M—that sits on top of an enterprise’s infrastructure. Such a platform ensures that everything runs in its appropriate place, runs properly, and serves the desired business outcomes, while allowing developers to create an overarching set of automation in the same way they create their application-specific code.
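At its core, the orchestration Leigh describes—mapping out the technologies that must work together and running them in order—amounts to executing a dependency graph. The sketch below illustrates that idea generically using Python's standard-library topological sorter; it is not BMC Control-M's API, and the job names and dependencies are hypothetical.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical end-to-end flow: data lands, is transformed, and then feeds
# both an application deployment and an analytics refresh.
jobs = {
    "ingest_data": set(),
    "transform": {"ingest_data"},       # waits on ingestion
    "deploy_app": {"transform"},        # waits on transformation
    "refresh_analytics": {"transform"},
}

def run(job):
    # Stand-in for invoking the real tool behind each step.
    print(f"running {job}")

# static_order() yields jobs so every dependency runs before its dependents.
order = list(TopologicalSorter(jobs).static_order())
for job in order:
    run(job)
```

A real orchestration platform adds scheduling, retries, monitoring, and cross-team visibility on top of this ordering, which is where the transparency Leigh calls for comes from.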

For an in-depth review of data management and DevOps trends, including case studies, use cases, and more, you can view an archived version of the webinar here.