Exploring The Key Pillars of a Modern Resilient Data Architecture


Establishing data architecture components before embarking upon a data project “is a crucial step in understanding how the data will be used and how it will bring value to the business,” said Pey Silvester, vice president of engineering. “For resilient data architecture, it’s important for organizations to reference the big data architecture framework as a blueprint for data infrastructures and solutions. This framework defines how big data solutions should work, the components that must be used, how information will flow, and critical security details.”

Multi-cloud approaches should also be incorporated into resilient data architecture planning. “With data now living across multiple clouds, a resilient architecture must have the capabilities to help you protect data no matter where it lives,” noted Preston. “This includes a multi-cloud control plane built on three principles: no infrastructure, global policies, and self-service with central oversight. With self-service, you can delegate responsibility to the data and application owners while retaining centralized control.”
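Preston's pairing of self-service with central oversight amounts to central administrators setting guardrails once, while data and application owners operate freely within them. A minimal sketch of that idea (all class, field, and policy names here are hypothetical, not drawn from any specific product):

```python
# Illustrative sketch of a control plane enforcing a global policy while
# allowing self-service requests. Names and thresholds are hypothetical.
from dataclasses import dataclass


@dataclass(frozen=True)
class GlobalPolicy:
    """Guardrails defined once by central administrators."""
    min_retention_days: int
    require_immutable_copies: bool


class ControlPlane:
    def __init__(self, policy: GlobalPolicy):
        self.policy = policy      # central oversight
        self.backup_jobs = {}     # jobs created via self-service

    def request_backup(self, owner: str, dataset: str, retention_days: int) -> bool:
        """Self-service: an owner schedules their own backup, but the
        request is validated against the global policy before acceptance."""
        if retention_days < self.policy.min_retention_days:
            return False          # rejected by the central guardrail
        self.backup_jobs[(owner, dataset)] = retention_days
        return True


plane = ControlPlane(GlobalPolicy(min_retention_days=30, require_immutable_copies=True))
print(plane.request_backup("analytics-team", "clickstream", retention_days=7))   # False
print(plane.request_backup("analytics-team", "clickstream", retention_days=90))  # True
```

The owner never touches infrastructure directly; the control plane mediates every request, which is what keeps delegation compatible with centralized control.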

Storage is also an important part of the equation that often gets overlooked, said Rivero. “Underlying storage components must prioritize the quality of both the data itself and the accompanying metadata while having the ability to seamlessly scale or replace storage options,” he said. “These attributes are characteristic of cloud-based technologies that take most of the computing, processing, transfer, and storage activities off-site to secure, managed environments.”
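Prioritizing metadata quality, as Rivero recommends, can be enforced at ingestion time with a simple completeness gate before a dataset is accepted into managed storage. A minimal sketch (the required field names are hypothetical and would vary by organization):

```python
# Hypothetical sketch: reject datasets whose metadata is missing required
# fields before they land in managed storage. Field names are illustrative.
REQUIRED_METADATA = {"owner", "source_system", "created_at", "schema_version"}


def metadata_is_complete(metadata: dict) -> bool:
    """Return True only if every required field is present and non-empty."""
    return all(metadata.get(field) for field in REQUIRED_METADATA)


record = {"owner": "sales-eng", "source_system": "crm",
          "created_at": "2024-01-15", "schema_version": "2"}
print(metadata_is_complete(record))                  # True
print(metadata_is_complete({"owner": "sales-eng"}))  # False
```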

In addition, from a technical perspective, an on-premises resilient data architecture should include “physically distributed resources and load balancing at each tier, and infrastructure monitoring, app monitoring, and service monitoring—CPU load, memory usage, query execution, interaction between different services and app components, and the number of bytes,” said Apshankar. At the cloud level, a resilient data architecture should be built on “an open and seamless data architecture that includes data preparation tools, data visualization tools, and agile collaboration tools.” The ideal approach needs to be “infrastructure as code, which includes multiple codes, data centers, environments, templates, and executions.”
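The monitoring signals Apshankar lists (CPU load, memory usage, query execution) ultimately feed alerting logic that compares samples against per-metric thresholds at each tier. A minimal sketch of that check (metric names and threshold values are hypothetical):

```python
# Hypothetical sketch: flag per-tier health metrics that exceed alert
# thresholds, covering the kinds of signals mentioned above.
THRESHOLDS = {"cpu_percent": 85.0, "memory_percent": 90.0, "query_ms": 500.0}


def failing_metrics(samples: dict) -> list:
    """Return the names of metrics that exceed their alert threshold."""
    return [name for name, value in samples.items()
            if value > THRESHOLDS.get(name, float("inf"))]


tier_sample = {"cpu_percent": 91.2, "memory_percent": 70.0, "query_ms": 620.0}
print(failing_metrics(tier_sample))  # ['cpu_percent', 'query_ms']
```

In practice this kind of rule would live in a monitoring system rather than application code, but the structure—sampled values checked against centrally defined thresholds per tier—is the same.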


Of course, building a resilient data architecture is not an overnight process. There are obstacles that will arise—not only technologically, but organizationally as well. “Data is often created in and constrained by internal silos that prevent real-time information from being widely shared. Without access to data, analytical efforts are duplicated across teams and otherwise rich data becomes redundant,” said Mehta.

IT teams are struggling to keep pace with today’s data resilience challenges “because most of them are still running backup systems that were designed for a different era,” Preston warned. On-prem backup systems require significant enhancement and security upgrades to function in today’s environment, in which ransomware attacks occur multiple times a minute, Preston said. “In addition, IT teams are significantly understaffed due to a global talent shortage. These two things together mean more things to do—enhancements to the backup system—with fewer people to do them.”
