Edge by Design: Proactive Ways to Boost Security and Compliance in Cloud-Edge Hybrid Deployments

COVID-19 placed intense pressure on organizations, which responded with accelerated technology investments to cope with the added strain the pandemic put on their operations and enterprise data architectures.

These investments helped a lot of firms stay in business, but the rush to digitize came with a downside: Many companies now struggle with newly complex hybrid architectures that aren’t always optimized for security and performance. In the process, they’re also finding that the growing adoption of edge computing as part of this digitization surge is adding even more challenges around scale, complexity, compliance, and security.

For more articles about the future of data management in 2022, download Big Data Quarterly: Data Sourcebook (Winter 2021) Issue

Edge computing allows processing and advanced analytics to happen at, or near, the source of data, reaping benefits in performance, latency, and cost that are driving 12% annual growth toward a forecasted $250 billion market by 2024, according to IDC. Now is the time to put better strategies around these edge implementations so they fit more safely and easily into the wider enterprise environment.

Edge Computing Adds Both Capabilities and Complexity

The pandemic threw companies into innovation overdrive, and a big part of this was major investment in edge computing. Firms applied edge technology to a range of use cases, especially in manufacturing, energy production, facilities management, health monitoring, and other settings where COVID-19 created significant business challenges due to workforce health concerns and mass telework models.

Even beyond the pandemic, the value of edge computing to the enterprise is compelling: Because edge computing happens near the source of data, instead of routing everything back and forth with a central cloud server, companies benefit from lower data transfer costs as well as from capacity gains in bandwidth and real-time analytics. But as edge computing gets added to hybrid cloud or multi-cloud environments, the challenges of visibility and control grow along with it.

Getting visibility into assets across the digital ecosystem, something that is difficult enough with hybrid cloud and on-prem architectures, becomes exponentially harder with edge computing, given the dramatic growth in data these systems bring to the enterprise. The stakes are raised further by the steep data security and compliance mandates that govern most business IT environments and define the risk management agenda.

Better Visibility Into the Edge as a Starting Point

Organizations need visibility into assets and architectures in order to discover and optimize where their data resides and to record topologies accurately. The conundrum is that edge computing by definition happens without necessarily going to a central cloud server or on-prem system. So how do you get visibility?

It turns out that basic infrastructure and connectivity are not the main culprits in blocking visibility. AWS Outposts, Azure Stack, and other hybrid cloud technology services typically include extended cloud management controls and APIs to help accommodate edge deployments, and new 5G networks further benefit edge computing. In addition, there’s momentum toward more standardization, such as edge-compliant hardware designed proactively to accommodate current or future edge implementations.

Instead, the biggest visibility challenge with edge computing is a function of its power: How do we make sense of the additional deluge of machine, sensor, and related edge data that we’re suddenly able to access through these systems? By some estimates, 75% of enterprise data is expected to be created and processed at the edge by 2025. That’s one indicator of the sheer volume of data created by the edge—something that can easily grow to an unmanageable scale for many organizations.

In addition, edge applications are often dynamic rather than persistent, meaning services may be only briefly or intermittently present, as with certificate management or digital identity systems. Dynamic applications spinning up and down can wreak havoc on traditional configuration, application dependency mapping, and visualization approaches. They can also be a system administrator’s nightmare for firmware version control, patching, and related tasks, and they add exposure to cyber-threats such as endpoint and hardware spoofing attacks.

IT decision makers could use a road map for integrating edge computing into their best-of-breed, hybrid-cloud systems.

Four Ways to Become More Proactive

Every organization’s journey to maturity in its hybrid-cloud-edge environment will vary depending on the business context. That said, a few best practices generally apply and can be understood as an extension of DevSecOps and security-by-design principles, especially in the critical realms of asset integration, system compliance, and governance. In that spirit, think of these four tips as the basis for an “edge by design” mindset:

  1. Factor the edge into system configurations. Keep the unique qualities of edge computing in mind early as you configure, patch, and update systems. Use edge-friendly methodologies, such as the open source container-orchestration system Kubernetes, to automate application deployment, scaling, and management. And, given the volume of data involved, be sure to automate wherever possible to keep track of changes at scale.
  2. Optimize hardware and infrastructure to be edge-compliant. Even at scale, ensure a root of trust between software, hardware, and the edge systems they’re tied to. Wherever possible, use edge-compliant hardware and comprehensive device identity verification protocols. Ensure third-party risk management (TPRM) is edge-aware, particularly in warehouse, delivery, and other situations in which endpoints, such as delivery scanners, may be in the hands of thousands of workers.
  3. Plan for edge considerations in application dependency mapping. Given the huge and variable amounts of dynamic data created by edge computing, be sure to plan for the impact of edge programs and performance on your larger architectures. For instance, if your edge systems have petabytes of machine-performance data, but the cloud system designed to generate related monthly compliance reports can’t handle such volume, consider a heuristics approach to share metadata or periodic rollups for systems that may not need, or be able to handle, the full granular upload.
  4. Enhance and automate monitoring, at scale. This final recommendation ties back to the visibility challenges of scale and complexity mentioned earlier. Because of the scale, organizations leveraging edge systems must monitor thoroughly, whether it’s scouring edge-related dependencies, runtime versions, library vulnerabilities, or other process considerations. Look for ways to automate this monitoring wherever possible to keep pace with the massive volume and dynamic nature of edge data and operations.
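The change tracking in tip 1 can be sketched as configuration fingerprinting: hash each device’s configuration snapshot and compare the fleet against a baseline. This is a minimal illustration, not a production tool; the function names and the assumption that configurations arrive as JSON-serializable dictionaries are ours.

```python
import hashlib
import json


def config_fingerprint(config: dict) -> str:
    # Canonicalize with sorted keys so logically equal configs hash identically.
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


def detect_drift(baseline: dict, current: dict) -> dict:
    # Both arguments map device IDs to fingerprints; return devices whose
    # configuration changed, or newly appeared, since the baseline was taken.
    return {dev: fp for dev, fp in current.items() if baseline.get(dev) != fp}
```

Because only fixed-size hashes travel to the central system, an approach like this scales to large fleets without shipping full configurations upstream.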
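For the device identity verification mentioned in tip 2, one common pattern is a keyed challenge-response: the verifier sends a random challenge and the device proves possession of a shared secret without ever transmitting it. A hedged sketch using only Python’s standard library; real deployments would typically use per-device certificates or hardware-backed keys instead of a raw shared secret.

```python
import hashlib
import hmac
import secrets


def issue_challenge() -> bytes:
    # A fresh random nonce per attempt prevents replaying an old response.
    return secrets.token_bytes(32)


def sign_challenge(device_key: bytes, challenge: bytes) -> str:
    # The device computes this and sends back only the HMAC, not the key.
    return hmac.new(device_key, challenge, hashlib.sha256).hexdigest()


def verify_device(device_key: bytes, challenge: bytes, response: str) -> bool:
    expected = sign_challenge(device_key, challenge)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, response)
```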
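Tip 3’s rollup idea can be illustrated by collapsing raw sensor readings into per-window summaries before upload, so the central reporting system receives a fraction of the data. A simplified sketch; the window size and summary fields are illustrative choices, not a prescribed schema.

```python
from collections import defaultdict
from statistics import mean


def rollup(readings, window_seconds=3600):
    # readings: iterable of (unix_timestamp, value) pairs from an edge node.
    buckets = defaultdict(list)
    for ts, value in readings:
        buckets[ts - ts % window_seconds].append(value)
    # Upload only these summaries instead of the full granular stream.
    return {
        start: {"count": len(vals), "min": min(vals),
                "max": max(vals), "mean": mean(vals)}
        for start, vals in buckets.items()
    }
```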
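The automated monitoring in tip 4 can be as simple as sweeping each node’s reported library versions against current advisories on a schedule. A minimal sketch, assuming every node reports an inventory dictionary; the data shapes here are hypothetical, and a real pipeline would feed this from automated inventory collection.

```python
def scan_fleet(fleet: dict, advisories: dict) -> dict:
    # fleet: {node_id: {library: version}} reported by each edge node.
    # advisories: {library: set of known-vulnerable versions}.
    findings = {}
    for node, inventory in fleet.items():
        hits = [(lib, ver) for lib, ver in inventory.items()
                if ver in advisories.get(lib, ())]
        if hits:
            findings[node] = hits
    return findings
```

Run on a schedule, a sweep like this keeps pace with dynamic nodes that spin up and down between inventory cycles.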

Looking Ahead

Hybrid environments have never been easy, as cloud/on-prem migrations have shown. Adding edge computing to the mix is a challenge of its own, but it’s well worth the effort: Done right, these assets deliver the performance that gives your organization a meaningful competitive advantage.
