Predictable Performance For NoSQL Applications


Sharding is a well-understood method of achieving excellent performance by distributing the workload across multiple servers and/or storage devices. Spreading the work of storing data on disk allows latency and response time to remain essentially unchanged even as the amount of data grows over time. By growing the number of shards as the workload requires, the performance of an increasingly larger system can be maintained, because more shards provide more I/O paths to the slowest component, typically the storage device. A system that can dynamically add shards based on application demand will show steady and consistent performance as use of the database increases over time. The figure below shows how different shards can be assigned to different storage nodes.

Figure 4
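As a rough sketch of the assignment shown in the figure above (the node names and function below are illustrative, not any particular product's API), the following Python snippet spreads a set of logical shards across storage nodes and shows how adding a node gives the same data more I/O paths:

    # Minimal sketch: mapping logical shards onto physical storage nodes.
    def assign_shards(num_shards, nodes):
        """Assign each shard to a node round-robin."""
        return {shard: nodes[shard % len(nodes)] for shard in range(num_shards)}

    nodes = ["node-1", "node-2", "node-3"]
    print(assign_shards(8, nodes))
    # Adding a fourth node spreads the same shards over more I/O paths.
    print(assign_shards(8, nodes + ["node-4"]))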

In order to balance the I/O load on the available servers and storage devices, an efficient placement algorithm needs to be implemented. Simply sending one file (as an example) to server 1, the next to server 2, and the next to server 3 may not result in a balanced I/O strategy. If one of the files is larger and takes more time to send over the network and store on disk, the system software must be aware of this and make adjustments accordingly. In many systems, a hash function is used to determine which node to send the data to. A good hash function will evenly distribute data over the shards (assuming one shard per storage device).
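A minimal sketch of hash-based placement, assuming one shard per storage device; the function name and key format are hypothetical, but the idea is the one described above: hashing the key spreads data across the shards regardless of arrival order.

    import hashlib

    NUM_SHARDS = 4  # assume one shard per storage device

    def shard_for_key(key: str) -> int:
        """Hash the key and use the result to pick a shard."""
        digest = hashlib.md5(key.encode("utf-8")).hexdigest()
        return int(digest, 16) % NUM_SHARDS

    # A quick check that keys distribute roughly evenly over the shards.
    counts = [0] * NUM_SHARDS
    for i in range(10_000):
        counts[shard_for_key(f"user:{i}")] += 1
    print(counts)  # each shard receives roughly 2,500 keys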

Another performance metric that can and should be monitored is the performance of the storage node itself. If the hash function decides to send the data to shard 1, but that server or storage device is busy, the data will be held until the storage device is able to accept more. A related challenge is how busy the network is at that instant.
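A hedged sketch of how a driver might react when the target node is busy; the queue-depth values and threshold below are stand-ins for whatever health statistics a real system exposes (heartbeats, server-side metrics), not a real API:

    import time

    # Hypothetical per-node queue depth; a real driver would learn this
    # from server statistics rather than a static dictionary.
    node_queue_depth = {"node-1": 3, "node-2": 87, "node-3": 5}
    MAX_QUEUE_DEPTH = 50  # beyond this the node is considered busy

    def send_write(node: str, payload: bytes, retries: int = 3) -> bool:
        """Hold the write briefly while the target node is busy."""
        for _ in range(retries):
            if node_queue_depth[node] < MAX_QUEUE_DEPTH:
                # ... the actual network send would happen here ...
                return True
            time.sleep(0.01)  # back off while the node drains its queue
        return False  # the caller sees this delay as added, unpredictable latency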

An important consideration when benchmarking database performance is how the database responds over time, as the storage devices fill up and system memory (RAM) is consumed. Many benchmarks measure the performance of a system on a specific task. However, it is important to view the performance of a database system over a long period of time, which is typical of most production environments. While many systems may show exceptional performance at the start, a software system that must stay active for long periods and continue to perform well is much harder to design and implement.
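One way to make that long-running behavior visible is to record latencies in fixed time windows rather than as a single aggregate number. The sketch below (the do_operation argument stands in for whatever insert or read the benchmark performs) reports a tail-latency figure per window, so degradation as disks fill and RAM is consumed shows up as later windows getting slower:

    import time
    from collections import defaultdict

    def run_benchmark(do_operation, duration_s, window_s=60.0):
        """Collect per-window latencies so behavior over time can be inspected."""
        windows = defaultdict(list)              # window index -> latencies
        start = time.monotonic()
        while time.monotonic() - start < duration_s:
            t0 = time.monotonic()
            do_operation()                       # one operation against the database
            windows[int((t0 - start) // window_s)].append(time.monotonic() - t0)
        for w in sorted(windows):
            lat = sorted(windows[w])
            p95 = lat[int(0.95 * (len(lat) - 1))]
            print(f"window {w}: {len(lat)} ops, p95 latency {p95 * 1000:.1f} ms")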

As a database grows horizontally, the data becomes distributed, residing on many different storage devices. Finding and retrieving the desired data can become time consuming if the system is architected in a way that the exact location of the data is not kept current. In a poorly designed system, if a request to retrieve data is sent to a storage server that does not contain the data, the request must be forwarded to another storage server, and this continues until the data is located. The node containing the data must then communicate back to the requesting client. This can take a significant amount of time and can lead to unpredictable performance, as the latency penalty will affect some of the data but not other data. However, if every client keeps track of where the data has been stored, a single request can be made to the server holding the data and a single acknowledgement message can be returned. In this single-request, single-response architecture, the latencies involved in requesting data are constant. The figure below graphically shows the number of messages and the data flow when the application does not keep track of where the data resides.

Figure 5

When the application driver keeps track of where the data is stored, the number of messages and the data flow are much simplified, resulting in better performance and a more predictable response.

Figure 6
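The contrast between the two figures can be sketched in code: a driver that keeps a local shard map (the class and method names below are illustrative only) computes the owning node itself and issues exactly one request, whereas a location-unaware client pays extra hops while servers forward the request until the data is found.

    import hashlib

    class ShardAwareClient:
        """Driver that remembers which node owns each shard (as in Figure 6)."""

        def __init__(self, shard_map, num_shards):
            self.shard_map = shard_map      # shard id -> node address
            self.num_shards = num_shards

        def node_for_key(self, key):
            # Use the same hash-based placement the cluster used for the write.
            digest = hashlib.md5(key.encode("utf-8")).hexdigest()
            return self.shard_map[int(digest, 16) % self.num_shards]

        def get(self, key):
            node = self.node_for_key(key)   # single request, single response
            return self._send(node, ("GET", key))

        def _send(self, node, message):
            ...  # network call: one round trip with constant latency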

Understanding the underlying computer architecture, network topology, and software optimizations results in a NoSQL system that is both higher performing and more predictable. In the figure below, the optimized system demonstrates high performance and predictability, compared to a slower system with more variable latency.

There are a number of scenarios in various industries where the fast and predictable performance of retrieving information from a NoSQL database is critical for business reasons.

  • Online customer service – when a customer visits a web site for some kind of customer service, the response needs to be fast and predictable. Otherwise, different customers (or the same customer on different visits) will experience different levels of service, and some will become frustrated and leave the web site.
  • Customer loyalty programs – if offers are to be presented as a customer walks through a store or attempts to redeem points at a checkout, the system making these decisions needs to be tuned and sized to respond in a predictable manner. If not, the opportunity to interact with the customer may be lost.
  • Web response – it is critical that a web service respond to its users in a predictable manner. If the system fails to respond within the designed time period, users may leave the site and go somewhere else, whether for information or for purchasing.

Understanding the Performance of NoSQL Databases Over Time

NoSQL database systems are designed to store data differently than relational database systems and are known for fast storage of a wide range of data types. However, when evaluating different systems, it is important to understand their performance over time. Databases are meant to remain active over long periods as more data is stored. Organizations and businesses are increasingly sensitive to customer expectations, and predictable performance of their databases is critical.
