Emerging Database Technology Promotes Business Resilience


North American businesses collectively lose $26.5 billion in revenue each year as a result of slow recovery from IT system downtime, according to a recent study. To protect against unexpected outages, IT organizations create redundant backup systems, duplicating every layer of their existing infrastructure and preparing elaborate disaster recovery processes. This approach is expensive and only partly effective, as the string of notable outages demonstrates; at best, it minimizes downtime rather than preventing it. Major web-scale companies, such as Google and Facebook, have instead figured out how to scale their application stacks out horizontally rather than up vertically.

Evolving the Infrastructure Stack

These companies achieve this largely by introducing new technologies that augment traditional database technology, yielding operational advantages that include improved response time and built-in redundancy. Unfortunately, these gains come at the cost of a significantly more complicated development model and a higher development cost structure. The trade-off reflects the state of information technology today: while the internet and affordable computing power have dramatically altered how applications look and feel, the fundamental technologies, such as relational databases, have stayed relatively unchanged for decades. New layers are added that bring additional capabilities along with additional complexity. For enterprise software to achieve similar advantages without those additional operational costs, database technology and the infrastructure stack must evolve dramatically.

Complexity Meets High Availability

The complexity of even small business networks today dwarfs that of large-enterprise networks 15 years ago. While replication, server virtualization, virtual machine migration, SAN arrays, converged networks and other relatively new technologies provide benefits, implementing them carries significant costs that many organizations overlook. Complexity makes implementation errors and system failures even more likely.

Ironically, the message is that these enterprise systems are so complex they are likely to fail; yet to prevent that, you need to add even more complexity!

Organizations make significant investments to achieve high availability and business continuity, and every time a new application is deployed, these expenses grow as the redundant infrastructure is scaled up. Because of the intrinsic complexity of current application deployments, attempts at redundancy are often ineffective and application availability suffers.

What's now required is an application infrastructure that inherently provides high availability without the additional dedicated infrastructure demanded by 2N or 3N redundancy. If a site became unreachable due to an outage, geographic redundancy would preserve the availability of applications and data. Until now, the inability of traditional database systems to provide reliable, accurate update-anywhere capabilities has prevented these types of architectures.
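
To make "update anywhere" concrete: each site must be able to accept writes locally, then detect when two sites have modified the same record independently. The sketch below shows one common building block for this, a version vector; the class, method and site names are illustrative assumptions, not drawn from any particular product.

    import java.util.HashMap;
    import java.util.Map;

    // Minimal version-vector sketch: each site increments its own counter on
    // every local update, and comparing two vectors reveals whether one update
    // supersedes the other or whether they conflict and need reconciliation.
    public class VersionVector {
        private final Map<String, Long> clock = new HashMap<>();

        // Record a local update at the named site.
        public void increment(String siteId) {
            clock.merge(siteId, 1L, Long::sum);
        }

        // True if this vector is >= the other on every component.
        public boolean dominates(VersionVector other) {
            for (Map.Entry<String, Long> e : other.clock.entrySet()) {
                if (clock.getOrDefault(e.getKey(), 0L) < e.getValue()) return false;
            }
            return true;
        }

        // Concurrent (conflicting) if neither vector dominates the other.
        public boolean concurrentWith(VersionVector other) {
            return !this.dominates(other) && !other.dominates(this);
        }

        public static void main(String[] args) {
            VersionVector newYork = new VersionVector();
            VersionVector london = new VersionVector();
            newYork.increment("nyc");  // update applied at the New York site
            london.increment("lon");   // independent update at the London site
            // Neither site has seen the other's write: a genuine conflict that
            // an update-anywhere database must detect, not silently lose.
            System.out.println("conflict: " + newYork.concurrentWith(london)); // true
        }
    }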

Application Scale and Performance

Emerging technologies that fundamentally decentralize applications and data greatly improve business resilience and simplify disaster and network recovery. They are designed to handle less-than-perfect performance from all components of the infrastructure.

New approaches to scalable application computing simplify IT infrastructure by combining the various required elements - including storage, load balancing, database and caching - into easily managed appliances or cloud instances. Unlike conventional infrastructures, where scale, redundancy and performance are increased by "scaling up" and adding more tiers of components, this architecture grows by "scaling out": simply adding more identical nodes.
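
As a rough illustration of why scale-out placement works, the sketch below uses consistent hashing, a technique many distributed data stores employ so that adding a node moves only a small slice of the data. The hash function, virtual-node count and node names are illustrative assumptions.

    import java.nio.charset.StandardCharsets;
    import java.util.SortedMap;
    import java.util.TreeMap;
    import java.util.zip.CRC32;

    // Minimal sketch of scale-out data placement via consistent hashing.
    // Adding a node redistributes only a fraction of the keys, which is why
    // capacity can grow by adding identical nodes rather than bigger tiers.
    public class HashRing {
        private final TreeMap<Long, String> ring = new TreeMap<>();

        private static long hash(String s) {
            CRC32 crc = new CRC32();
            crc.update(s.getBytes(StandardCharsets.UTF_8));
            return crc.getValue();
        }

        // Place a node on the ring; several virtual points smooth the load.
        public void addNode(String node) {
            for (int i = 0; i < 16; i++) ring.put(hash(node + "#" + i), node);
        }

        // A key belongs to the first ring point at or after its hash,
        // wrapping around to the start of the ring if necessary.
        public String nodeFor(String key) {
            SortedMap<Long, String> tail = ring.tailMap(hash(key));
            return tail.isEmpty() ? ring.firstEntry().getValue()
                                  : tail.get(tail.firstKey());
        }

        public static void main(String[] args) {
            HashRing ring = new HashRing();
            ring.addNode("node-a");
            ring.addNode("node-b");
            System.out.println("order:1042 -> " + ring.nodeFor("order:1042"));
            ring.addNode("node-c"); // scale out: most keys keep their old owner
            System.out.println("order:1042 -> " + ring.nodeFor("order:1042"));
        }
    }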

These systems automatically store data across the nodes based on policy, usage and geography, and intelligently deliver information when and where it is needed. All information is replicated across multiple nodes to ensure availability. If a node fails, users are re-routed to other nodes with access to their data so that productivity does not suffer. When the original node recovers, it resumes participating in the flow of data and applications, and local users are reconnected to it. The system automatically synchronizes data in the background so that no data is lost and performance is not compromised. These new technologies preserve existing investments in enterprise software, which is heavily dependent on SQL and transactions with ACID semantics. They also leverage developer skillsets such as Java, so existing application ecosystems can readily take advantage of this innovative technology.
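
From the client's side, the rerouting described above can be pictured as trying replicas in preference order. The minimal sketch below assumes a list of node addresses and a generic query function; a production client would add timeouts, retry limits and the background resynchronization mentioned above.

    import java.util.List;
    import java.util.function.Function;

    // Minimal failover sketch: a request is tried against each replica in
    // preference order, so a failed node is transparently skipped.
    public class FailoverClient {
        private final List<String> replicas; // nearest node first

        public FailoverClient(List<String> replicas) {
            this.replicas = replicas;
        }

        // Run the query against the first reachable replica.
        public <T> T execute(Function<String, T> query) {
            RuntimeException last = null;
            for (String node : replicas) {
                try {
                    return query.apply(node); // e.g. open a SQL connection here
                } catch (RuntimeException e) {
                    last = e; // node down or unreachable: try the next one
                }
            }
            throw new IllegalStateException("all replicas failed", last);
        }

        public static void main(String[] args) {
            FailoverClient client =
                new FailoverClient(List.of("db1.local", "db2.local", "db3.local"));
            String result = client.execute(node -> {
                if (node.equals("db1.local")) throw new RuntimeException("node down");
                return "answered by " + node; // simulated successful query
            });
            System.out.println(result); // answered by db2.local
        }
    }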

Geographic Spread and Support for Remote Workers

Organizations today are more geographically dispersed than ever, and many IT organizations have dedicated significant resources to ensuring adequate response-time performance for their remote offices around the globe. These organizations have often invested heavily in infrastructure such as WAN optimization, federated applications and high-speed network connections. Today's typical application infrastructure requires a variety of components - a pair of hardware load balancers, application servers, database servers and storage for the data. Moreover, to attain redundancy, much of this infrastructure must be duplicated off-site.

The complexity of this type of infrastructure requires continual investment simply to maintain the systems and components. Yet poor performance and spotty availability are often a reality for those working in remote offices.

Taking a new approach to application deployment can result in significantly lower costs. Using inexpensive, identical nodes at each site and eliminating the need for a separate failover site could dramatically reduce initial capital expense. Another factor contributing to lower costs is the simpler, fully integrated stack, which makes applications much easier to deploy, manage and troubleshoot.

Is Data Center Consolidation the Solution, or Are There Other Approaches?

Despite business globalization, with customers, partners and employees more likely than ever to be located around the world, in recent years there has been a drive to consolidate data centers. The underlying assumption is that consolidated data centers allow information technology organizations to better control resource costs for space, energy, IT assets and manpower. Valid concerns about availability and performance for users in remote locations are often overlooked in light of the expense and complexity of achieving global scale-out with traditional database applications. Unfortunately, the consolidation cost savings aren't always as dramatic as anticipated, and new problems are often introduced as a result.

Substantial problems remain with maintaining availability and performance for remote workers. Additionally, high-speed WAN links used in attempts to address these problems can be prohibitively expensive, particularly outside North America.

If all the required application infrastructure components resided on comprehensive nodes, the nodes could be placed in small and remote locations. Since virtually all of the supporting infrastructure for an application would be included in a node, performance and responsiveness would improve at each site.

Leveraging Efficiencies of Virtualization and Cloud Computing

Ongoing support costs would also be reduced because scaling an application this way is much easier than with traditional deployments. If a site is growing and needs greater scale, a node can simply be added at that site. This approach makes sense only if no additional IT staff is required at the remote sites; for instance, adding a node should be easy enough that non-IT staff can do it.

As organizations look at ways to leverage the economics and efficiencies of virtualization and cloud computing, it is becoming painfully clear that the traditional approaches to infrastructure that underlie most of today's cloud offerings do not effectively enable the potential agility of these new models.

Today, organizations are wrestling with ways to take advantage of cloud economics while maintaining control of their data and providing improved support for remote users. Now is the time for technology that enables deployment on-premises, in the cloud, or a combination of both.

This is the next phase in truly enabling IT organizations to deliver applications with consistently high availability and performance to global and mobile workers, while maintaining an elastic and robust infrastructure within the constraints of tight budgets.

Conclusion

The future of enterprise computing requires truly distributed database computing that enables remote workers to be highly productive. Simplified, smarter application platforms that integrate disparate technologies such as data storage, database, application servers and load balancing will surpass existing solutions in cost, manageability and reliability.

Fundamentally resilient architectures and technologies are emerging that enable IT professionals to build solid infrastructures, eliminate downtime, and deliver applications with consistently high availability to global and mobile workers.

 

