Denial of Service Attacks Can Come Directly From Silicon Valley


The world changed over the last year. Future historians will complete their theses focusing on different quarters or even specific months of 2020. But one of the most overused clichés in thinking about this period has been the idea that “the more things change, the more they remain the same.” Let’s consider sports in 2020. Major League Baseball had a 60-game season, the NBA finals were played in October, and cardboard cutouts took the place of fans in every sport. However, the Lakers won the NBA finals, the Dodgers won the World Series with the Yankees playing deep into the playoffs, and Tom Brady went to his 10th Super Bowl. The more things change …

Why Are We Surprised?

No one should be surprised at the significant, catastrophic security breach centered around SolarWinds’ Orion Platform last September, which directly affected approximately 18,000 customers, with an untold number of potential secondary effects. The attack may have come from Russian hackers, but it is possible that other nefarious parties, heretofore not considered, were involved.

How does this type of breach occur? Occam’s razor, the postulate that the simplest explanation is usually the most likely one, is worth considering here. And the simplest explanation is that so many breached software systems are connected to the internet. The information superhighway is the neural network of global communications that unites all electronic things and, by the same token, allows every electronic device to be snooped on, monitored, penetrated, and violated by every other. So, if the monitoring of a software system is necessary, and that monitoring is transmitted over the internet, then the 21st century’s soft underbelly is exposed. The same is true for the development of that monitoring software, but only if the development system is connected to the internet, if anyone working on it is connected to the internet, or if anyone working on the software can somehow carry it out of the semi-secure location in which they work. Why are we surprised when a security breach of this magnitude happens? Continuing with the Occam’s razor theme, the answer to this question is equally simple. Please pardon us for sounding very 1975 (we are getting up there in years), but simply stated: The best cybersecurity is concrete backed up by air.

The magnitude of the SolarWinds Orion “hack” cannot be overstated because, regardless of how many high-priced consultants and veterans of the tech overlords of the six cities of Silicon Valley (www.dbta.com/BigDataQuarterly/Articles/The-Six-Cities-of-Silicon-Valley-125014.aspx) attempt to create a softer perception of this disaster, every network and system that was touched directly or indirectly by the incursion will need to be rebuilt from the ground up. We say this because there will never be a comprehensive understanding of the degree and precision of penetration at one level or another, in one system or another. Recovering from this calamity will require an effort on the level of Y2K.

Old-School Methods

More importantly, a legitimate plan to prevent a recurrence, or something worse, will require that the business-critical applications at every one of the affected entities be secured via “old-school” methods. Readers, young and old, should contemplate how a similar situation would have unfolded in the late 1990s. For example, consider a young database administrator in 1999 saying, “Hey boss, I have an idea. We should take our most critical data and store it remotely on servers being offered by this guy in Seattle. He was just named Time magazine’s ‘Person of the Year,’ and he sells books. Plus, he has extra storage space, and he is leasing it cheaply. No guarantees at all, but it should work!” This story would not have had a happy ending, as it is likely that the next conversation the intrepid DBA had would have been outside of work and included the line, “Well, I’m between jobs.” The point of this sarcastic allegory is that in a mere two decades, the very philosophy of protecting the most critical application functionality and data has completely veered into the bizarre. In 2021, we casually entrust our critical business needs to remote cloud services, all run by massive companies whose own agendas do not always coincide with those of their customers. Again, the best cybersecurity is concrete backed up by air.

Extending our sensitivities regarding critical applications to the electrical, communication, military, and medical grids of Western civilization’s infrastructure, the apparent rationale for using cloud resources is a perception of reduced cost that does not always live up to the contrived media coverage. The same point applies to connecting these components of critical data infrastructure to the internet. To quote a March 2018 article in The Washington Examiner, “The federal agency that oversees the nation’s power grid was a prime target of nine Iranian hackers whom the Justice Department is indicting for ‘malicious’ cyber activity.” Why is our power grid infrastructure even on the internet? Again, the best cybersecurity is concrete backed up by air.

A different but equally volatile situation is the tremendous power in the tech overlords’ hands to eliminate companies—temporarily or permanently—that choose to place their critical applications and data in the cloud. Regardless of the circumstances behind recent events or anyone’s personal opinions on the efficacy, responsibility, authority, or power of the cloud providers, hardware producers, and mobile application disseminators, the facts remain clear. They have the unprecedented power to shut down your company if you give them that power. Without a moment’s notice, they can perform the greatest denial of service attacks conceivable. The actions may be well-intentioned or malevolent, but apparently they are all legal, at least for now. There are many valid reasons to use remote hosting or cloud services. Non-critical scalability, bursting, simplicity of access, speed of allocation, and unique microservices are among those justifications. However, cost reduction and the perception of security and control over your critical data should not be among them.

Critical Infrastructure Remains On-Premises

To summarize, in case the reader has yet to infer our point, it is our opinion that all critical infrastructure and data should remain on-premises. We believe that solid disaster recovery plans should be built and tested, and that those DR systems must be sufficiently geographically dispersed to circumvent malicious actors of all stripes. The hybrid cloud is a perfectly elastic model for meeting these critical requirements. Over the last few months, the world has seen that implementing impervious cybersecurity on the internet, and protecting against nefarious actors there, is nearly impossible. We have also seen that the most powerful tech overlords can immediately take complete control over and shut down a business that a moment before was their “customer.” One more time, the old-school security approach works because the best cybersecurity is concrete backed up by air.
