Fast Restart: The Key to Highly Available, In-Memory Big Data


A profound shift is occurring in where data lives. Thanks to skyrocketing demand for real-time access to huge volumes of data—big data—technology architects are increasingly moving data out of slow, disk-bound legacy databases and into large, distributed stores of ultra-fast machine memory. The plummeting price of RAM, along with advanced solutions for managing and monitoring distributed in-memory data, means there are no longer good excuses to make customers, colleagues, and partners wait the seconds, or sometimes hours, it can take your applications to get data out of disk-bound databases. With in-memory, microseconds are the new seconds.

Initially, architects treated in-memory data sets a lot like glorified caches, with legacy databases sitting behind them. Increasingly, however, as enterprises experience the unprecedented performance benefits of in-memory computing, they are moving all of their data into memory. Retailers are serving entire product catalogs from RAM. Financial services firms are keeping all of their transactions in memory. And, in some cases, companies are ditching their legacy databases entirely, relying instead on in-memory data stores as the systems of record that power their businesses.

Of course, with so much high-value, mission-critical data in machine memory, architecting a platform that is not only fast and scalable, but also highly available, has become a key imperative for enterprise architects. One of the most important availability challenges posed by in-memory stores at big data scale is bringing them back online after maintenance or an outage. Without giving careful thought to this problem, your in-memory architecture may not be able to provide the predictable, extremely low latency that your business demands.

What’s different about getting in-memory data sets back online when they are big data-sized?

When bringing an in-memory data set online after maintenance or an outage, three concerns must be addressed: (1) making sure that all of the data held in memory at the moment the system went down is preserved; (2) making sure that changes (writes) that occurred during the downtime are not lost; and (3) getting the data set back online as fast as possible to protect overall availability.

The business impact of failing to deliver on any of these three can be devastating. For example, a Fortune 500 online payments company that uses in-memory data management for fraud detection holds tens of terabytes of data in memory, including transaction histories, rules for detecting fraud, and events that correlate with increased fraud. A Fortune 150 retail brand keeps all of the product information for its e-commerce website in memory. A top global shipping company stores GPS, traffic, weather and flight data alongside current orders to route packages efficiently. When hardware fails, or recovery from maintenance takes longer than expected, the impact can run to millions or even hundreds of millions of dollars in lost sales, wrongly accepted fraudulent transactions, penalties for failing to honor service-level agreements (SLAs), and customer dissatisfaction.

At this point, you might be saying, “What’s the big deal? Just reload the in-memory data set from the database!” If you're only keeping a few gigabytes in a caching tool, like Memcached or Amazon’s ElastiCache, that might be an acceptable solution. However, when you’re talking about terabyte-scale in-memory stores, rebuilding them from a disk-bound database could take days. Hundreds of terabytes? Make that weeks. And beyond the time your server is offline, your availability takes a further hit because reloading from a central database can severely impact that database’s availability for other processes. As the saying goes, “Quantity has a quality all its own.”
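To see why reloads stretch into days, a rough back-of-the-envelope calculation is enough. The sketch below assumes an illustrative sustained extract rate of about 50MB per second from the central database, a made-up figure meant to account for query overhead, network hops and contention with other workloads, not a measurement of any particular product.

```java
/**
 * Back-of-the-envelope estimate: how long does it take to rebuild an
 * in-memory store by re-reading everything from a central, disk-bound
 * database? The throughput figure is an illustrative assumption.
 */
public class ReloadEstimate {

    static double hoursToReload(double terabytes, double effectiveMBPerSec) {
        double megabytes = terabytes * 1024 * 1024;      // TB -> MB
        double seconds = megabytes / effectiveMBPerSec;  // sustained extract rate
        return seconds / 3600.0;
    }

    public static void main(String[] args) {
        // Assumed ~50 MB/s effective rate once overhead and contention are included.
        System.out.printf("10 TB  at 50 MB/s: %.0f hours (~%.1f days)%n",
                hoursToReload(10, 50), hoursToReload(10, 50) / 24);
        System.out.printf("100 TB at 50 MB/s: %.0f hours (~%.1f days)%n",
                hoursToReload(100, 50), hoursToReload(100, 50) / 24);
    }
}
```

Under those assumptions, 10TB takes roughly two and a half days to reload, and 100TB takes several weeks.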

A Look at Restartability Approaches

To deal with the in-memory restart challenge, some developers initially turned to open source databases optimized for fast retrieval, though these can have significant limitations when it comes to quickly restoring in-memory data stores.

Let’s examine some examples of how these limitations play out. One option is to use a key/value store designed for rapid retrieval of data in response to queries. With source code annotations, a Java program can be made to manage transactionally consistent, reliable and persistent data. Though key/value storage is straightforward in this scenario, it has problems at scale, most notably with throughput, but also with intermittent pauses as Java manages garbage collection operations. In particular, restores of large data sets require multiple disk reads from disparate disk locations, which can get expensive in terms of time. 
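To make that cost concrete, here is a minimal sketch, using a deliberately simplified and hypothetical KeyValueStore interface, of restoring an in-memory map entry by entry: each key triggers its own read, which on disk translates into many small I/Os scattered across different locations.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical interface standing in for a disk-backed key/value store. */
interface KeyValueStore {
    Iterable<String> keys();   // enumerate persisted keys
    byte[] read(String key);   // one (potentially random) disk read per call
}

/**
 * Naive restore: rebuild the in-memory map one key at a time.
 * Every read(key) call may land on a different disk location, so restore
 * time is dominated by seeks rather than sequential throughput.
 */
public class KeyByKeyRestore {
    public static Map<String, byte[]> restore(KeyValueStore store) {
        Map<String, byte[]> inMemory = new ConcurrentHashMap<>();
        for (String key : store.keys()) {
            inMemory.put(key, store.read(key));   // one small I/O per entry
        }
        return inMemory;
    }
}
```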

Companies that sell in-memory databases perform data recovery using regular save points of a full image of the database, along with logs that capture all database transactions since the last save point. This can become a lengthy and complex process. Restores start from the latest save point, with logged changes reapplied: redo logs are applied for transactions committed since the last save point, and undo logs are applied for uncommitted transactions that were captured with that save point. The entire content of the row store is loaded into memory immediately, while column store tables, unless explicitly marked for preload at startup, are typically restored only when their data is first accessed.
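An outline of what that save-point-plus-logs recovery sequence implies might look like the following. Every type and method here is a hypothetical placeholder rather than the API of any particular in-memory database, and the ordering shown is just one reasonable arrangement of the steps described above.

```java
import java.util.List;

/**
 * Illustrative outline of save-point-plus-log recovery. All types and
 * methods are hypothetical placeholders used to show the sequence of steps.
 */
public class SavePointRecovery {

    interface InMemoryStore { }
    interface SavePoint { void loadInto(InMemoryStore store); }
    interface LogRecord {
        void redo(InMemoryStore store);
        void undo(InMemoryStore store);
    }

    public static void recover(InMemoryStore store,
                               SavePoint latestSavePoint,
                               List<LogRecord> undoLog,   // uncommitted txns captured in the save point
                               List<LogRecord> redoLog) { // committed txns since the save point
        // 1. Start from the most recent full image of the database.
        latestSavePoint.loadInto(store);

        // 2. Roll back uncommitted work that was flushed with the save point.
        for (LogRecord record : undoLog) {
            record.undo(store);
        }

        // 3. Reapply committed transactions logged after the save point.
        for (LogRecord record : redoLog) {
            record.redo(store);
        }
    }
}
```

Each of these phases requires its own pass over persisted state, which is why recovery time grows with both the size of the save point and the volume of logged changes.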

Classic peer-to-peer in-memory data grids typically use a relational database for backup. This presents its own challenges: a write-heavy application quickly runs into bottlenecks when every change must be synchronously written through to the database.
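A minimal sketch of that synchronous write-through pattern is shown below; the table name and schema are illustrative assumptions, and a real grid would batch or buffer writes, but the shape of the problem is the same: every in-memory put blocks on a disk-bound SQL statement.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Sketch of synchronous write-through from a grid node to a relational
 * backup database. The grid_backup table is an illustrative assumption;
 * a production handler would upsert and batch rather than insert per call.
 */
public class WriteThroughGridNode {

    private final Map<String, byte[]> localPartition = new ConcurrentHashMap<>();
    private final Connection backupDatabase;

    public WriteThroughGridNode(Connection backupDatabase) {
        this.backupDatabase = backupDatabase;
    }

    public void put(String key, byte[] value) throws SQLException {
        localPartition.put(key, value);              // fast: in-memory update
        try (PreparedStatement stmt = backupDatabase.prepareStatement(
                "INSERT INTO grid_backup (entry_key, entry_value) VALUES (?, ?)")) {
            stmt.setString(1, key);
            stmt.setBytes(2, value);
            stmt.executeUpdate();                    // slow: synchronous disk-bound write
        }
    }
}
```

The database, not memory, ends up setting the ceiling on write throughput, and it also becomes the bottleneck when the grid has to be rebuilt.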

A Better Approach to Fast Restart

As shown, many of the existing options for recovering and restoring large in-memory data sets fall short, either on speed or on complexity. It has become clear that, as terabyte-scale in-memory data sets become commonplace, a new approach is required. The optimal approach must combine the flexibility of key/value pair storage with high throughput on both reads and writes in order to minimize downtime while also meeting strict SLAs on recovery time and recovery point objectives.

Here’s what you should look for when building your in-memory architecture:

Local persistence: When restoring an in-memory data set, you don’t want to hit your central database. Rather, look for a solution that keeps everything you need for restart on local persistent media.

Read upfront, all at once: Read all of the data you’ll need in a single pass. If you have to keep going back to the disk to recreate your data set, you’ll pay a huge price in time.

Simple logs: This goes hand-in-hand with the previous point (one read). Design your logs so that they store only the data and changes you’ll be restoring, with as little overhead as possible. Database overhead is what kills the performance of inferior solutions, forcing multiple seeks for every read during a restore. The sketch following these points illustrates the idea.

Optimize for solid-state drives (SSDs) or traditional storage: Make sure your system is optimized to handle restart from either SSDs or traditional media. Like RAM itself, SSDs are coming down in price and are becoming more practical as persistent media for large data sets. With an SSD delivering a read throughput of 1GB per second, a 1TB in-memory data set can be read back in roughly 1,000 seconds, making it fully operational again in under 20 minutes.
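Putting the local-persistence, single-read and simple-log points together, here is a minimal sketch of a restore from a local append-only log. The record layout (a key followed by a length-prefixed value) is a hypothetical format chosen for illustration; the essential property is that the whole data set comes back in one sequential pass over local media, with no trips to a central database.

```java
import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

/**
 * Minimal sketch of restoring from a local, append-only log in a single
 * sequential pass. The record format (UTF key plus length-prefixed value)
 * is a hypothetical layout used only for illustration.
 */
public class SequentialLogRestore {

    /** One pass over the local log file rebuilds the in-memory map. */
    public static Map<String, byte[]> restore(String logPath) throws IOException {
        Map<String, byte[]> inMemory = new HashMap<>();
        try (DataInputStream in = new DataInputStream(
                new BufferedInputStream(new FileInputStream(logPath), 1 << 20))) {
            while (true) {
                String key;
                try {
                    key = in.readUTF();          // record header: the key
                } catch (EOFException endOfLog) {
                    break;                       // clean end of the log
                }
                int length = in.readInt();       // length-prefixed value
                byte[] value = new byte[length];
                in.readFully(value);
                inMemory.put(key, value);        // later records overwrite earlier ones
            }
        }
        return inMemory;
    }
}
```

Because the log is read sequentially from local media, restore speed is bounded by raw device throughput rather than by seeks or by a remote database.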

Conclusion

As the demand for real-time access to big data accelerates, companies will move more and more data into machine memory. As a result, the demand for sophisticated data persistence will grow. Any in-memory solution must be evaluated not only for its usefulness during normal operations, but also for its ability to quickly restart operations for terabyte-scale in-memory data stores.

About the author:

Mike Allen is vice president of product management for Terracotta, the company behind the BigMemory in-memory data core solution.

