Scaling For Uncertainty And Hypergrowth: What You Need To Know


When building a product or application, most systems designers understand that you can't have it all. Creating a system that is both high-performing and scalable is a daunting challenge because those goals often pull in opposite directions: high performance typically comes from keeping work within a single machine's memory space, while scalability comes from spreading it across several servers.

This has a business impact. Gartner's recent data found that 58% of IT executives reported an increase, or a plan to increase, emerging technology investment in 2021, including databases. Whether it's a brand-new software startup or a Fortune 500 SaaS company, the early days of product development focus more on meeting current customer demand than on lifetime scalability. But an application is only as strong as the database that supports it. As organizations grow, so do their databases and scaling needs, and scaling naively increases costs and complexity faster than throughput.

Database and systems researchers spend their days thinking about how systems grow over time and how to ensure that, at whatever scale a database system is deployed, the data users entrust to it remains safely stored and rapidly queried even as throughput demands grow.

Ingredient #1: The Right Mindset

Databases (and data-intensive systems generally) are the backbone of modern IT. When implementing a database, IT leaders need to consider whether a system can run its workload today and continue to do so as volumes and workloads evolve.

Some decision-makers are empirical: they measure needs and capabilities and look for a match. By understanding their workloads, data, and volume, they can determine whether a given database will meet their needs today and over the longer term. Other decision-makers are more abstract, preferring to reason about a system's architecture and look for the elusive property of scalability to decide whether it will be a good choice for the long term. This is a shortcut compared to measuring and understanding, and given time and business pressure it is understandable, but it is not infallible. It tends toward fetishizing architecture rather than fostering a deep understanding of the problem domain: a cargo train is highly scalable, but it requires significant infrastructure investment and is a poor choice for low-latency deliveries.

Some of the best system designers use architecture only as a rough guide and rely on empirical data to hone their designs. Grounding decisions in facts and measuring important characteristics such as throughput, latency, and volume will help IT leaders deliver truly dependable systems. For example, thinking that deploying a scalable database will solve all of an application's problems is naive and unrealistic. After all, judging a car by the capacity of its engine tells you very little about its performance. At first glance, a 3-liter engine seems powerful (at least to us Europeans), unless it's powering a tractor.
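To make that empirical mindset concrete, here is a minimal sketch (in Python) of measuring throughput and tail latency for a batch of representative queries. The db handle, its execute method, and the query list are hypothetical placeholders for whatever database driver and workload you actually run.

    import statistics
    import time

    def run_query(db, query):
        """Placeholder: run one representative query through your database driver."""
        return db.execute(query)  # hypothetical driver call; swap in your client's real API

    def measure(db, queries):
        """Time a batch of queries and report throughput plus median and p99 latency."""
        latencies = []
        start = time.perf_counter()
        for q in queries:
            t0 = time.perf_counter()
            run_query(db, q)
            latencies.append(time.perf_counter() - t0)
        elapsed = time.perf_counter() - start

        return {
            "throughput_qps": len(queries) / elapsed,
            "median_latency_s": statistics.median(latencies),
            "p99_latency_s": statistics.quantiles(latencies, n=100)[98],
        }

Numbers like these, gathered against a realistic data volume, say far more about whether a system will hold up than the presence of the word "scalable" on a datasheet.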

Ingredient #2: Laying the Groundwork 

As mentioned previously, a major challenge for a general-purpose database is balancing good performance against scale. Whether it is a social network of a few billion people or a graph of a national electrical grid, a robust database should adapt well to the specific use case.

For example, Facebook has spectacularly scaled its graph engine, but only because its topology is well understood and the underlying hardware and software are adapted to support it. Social networks are neat, but their topology is very different from a logistics network, a healthcare network, a knowledge graph for aerospace, or a graph of a data center. Facebook has a specific graph problem (large-scale but with a simple topology), and they built specific infrastructure to solve that problem.

There are often no easy choices in infrastructure design. Still, some can be simplified by choosing components or services that already meet the appropriate scale and performance SLAs (service-level agreements). A systems designer can then factor those metrics into their architecture and use them to reason about overall performance, throughput, scale, and fault tolerance.
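As a rough illustration of that kind of reasoning, the sketch below composes component SLAs into an end-to-end estimate for a request that calls three services in sequence. The service names and figures are invented for illustration, not real SLAs.

    # Back-of-the-envelope composition of component SLAs for a request that
    # passes through the services below in series. Names and numbers are
    # illustrative assumptions, not published figures.
    components = {
        "api_gateway": {"availability": 0.9999, "p99_latency_ms": 10},
        "app_service": {"availability": 0.9995, "p99_latency_ms": 40},
        "database":    {"availability": 0.9999, "p99_latency_ms": 25},
    }

    end_to_end_availability = 1.0
    latency_budget_ms = 0
    for c in components.values():
        # For services called in series, availabilities multiply.
        end_to_end_availability *= c["availability"]
        # Summing p99s gives a conservative latency budget, not an exact end-to-end p99.
        latency_budget_ms += c["p99_latency_ms"]

    print(f"Estimated end-to-end availability: {end_to_end_availability:.4%}")
    print(f"Conservative latency budget: {latency_budget_ms} ms")

Even a crude model like this makes trade-offs visible early: adding another serial dependency costs both availability and latency, and the budget tells you how much headroom remains.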

Ingredient #3: Setting Up For Sustainable Success

For long-term success, it's important to fine-tune and adjust your system's architecture as your database expands. For example, graph databases have particular access patterns, and exploiting locality in sub-graphs (also known as "neighborhoods") enables high-performance queries. Keeping these neighborhoods together in a single memory space is critical; spreading them across memory spaces should happen only once the significant performance penalties under the overall workload are understood. As a database evolves, it's crucial that the query planner recognizes topology changes and keeps replanning queries so they remain fast; without that replanning, the database will slow down over time. Additionally, it's essential to have intelligent and adaptive security systems so the machinery can serve queries accurately without losing speed.
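As a toy illustration of why neighborhood locality matters, the sketch below models a graph as adjacency lists with each node assigned to a partition (a stand-in for a memory space or server) and counts how many traversal hops stay local versus cross partitions. The partitioning scheme is invented for illustration and is not how any particular graph database stores data; the cross-partition hops are where the performance penalties mentioned above come from.

    from collections import deque

    # Toy graph and an assumed partition assignment; both are illustrative only.
    graph = {
        "a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c", "e"],
        "e": ["d", "f"], "f": ["e", "g"], "g": ["f"],
    }
    partition = {"a": 0, "b": 0, "c": 0, "d": 0, "e": 1, "f": 1, "g": 1}

    def traversal_cost(start, depth):
        """Breadth-first traversal that tallies local hops vs. partition crossings."""
        local, remote = 0, 0
        seen, queue = {start}, deque([(start, 0)])
        while queue:
            node, d = queue.popleft()
            if d == depth:
                continue
            for nbr in graph[node]:
                if partition[nbr] == partition[node]:
                    local += 1   # stays in the same memory space: cheap
                else:
                    remote += 1  # crosses to another memory space: expensive
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append((nbr, d + 1))
        return local, remote

    local, remote = traversal_cost("a", depth=3)
    print(f"local hops: {local}, cross-partition hops: {remote}")

If a query's neighborhood fits within one partition, the remote count stays near zero; a partitioning that splits hot neighborhoods drives it up, and that is the penalty worth measuring before spreading data across memory spaces.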

Scalability and its peers, performance and efficiency, can be daunting, but they are important to consider early in the product-planning phases to curb issues down the road. While vendors and customers will have different needs and challenges, both can appreciate the importance of scalability in a product roadmap. By understanding system needs, laying the appropriate groundwork, and setting systems up to adapt and grow over time, scalability can be attained without compromising speed or accuracy.


