Meeting High Performance Requirements with Memory-First Database Architecture


With data volumes growing rapidly, companies are searching for less expensive and more efficient storage technologies. The two most common approaches to storing data are on disk and in memory. Each method has its advantages, but there is a growing trend toward memory-based storage and away from disk.

The factors involved in these decisions were discussed in a recent DBTA webcast, “Building a Memory-First Database Architecture for Speed and Scalability,” presented by Joe McKendrick, Unisphere Research analyst, and Shane Johnson, product marketing manager with Couchbase, provider of Couchbase Server, a document-oriented NoSQL database.

A major factor in the emerging approaches to data storage is the value now being placed on data, as companies gain a greater appreciation for what it can mean to their bottom line, coupled with the need to appropriately store the high volumes of data they are amassing.

Traditionally, organizations have used disk storage with their databases. But as data volumes have grown, disk-based storage has struggled to keep up with companies’ needs.

Johnson discussed how in-memory databases perform and compared several NoSQL database architectures. With the cost of memory coming down just as big data is rising, it is becoming more apparent that memory storage is the way to go, he said. Dell servers based on the Intel Xeon E7-8870 processor and Amazon instances have already demonstrated what can be done with in-memory data storage.

The various NoSQL databases all use memory, but they leverage it somewhat differently. The first approach is a memory-second architecture: data is written to disk first and then cached in memory second. The drawbacks of this architecture are that data held in memory can become invalidated when the underlying data changes, and managing that invalidation adds complexity.
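
As a rough sketch of this pattern (illustrative only, not any particular product’s implementation), a memory-second store behaves like a disk-backed database with a cache bolted on: reads populate the cache, and every write must invalidate or refresh the cached copy, which is where the complexity comes from. The `DiskStore` class, file layout, and key names below are hypothetical.

```python
# Illustrative memory-second (cache-aside) pattern: disk is the primary store,
# memory is a secondary cache that must be kept consistent by hand.
import json, os

class DiskStore:
    """Hypothetical primary store: one JSON file per key on disk."""
    def __init__(self, path):
        self.path = path
        os.makedirs(path, exist_ok=True)

    def read(self, key):
        with open(os.path.join(self.path, key + ".json")) as f:
            return json.load(f)

    def write(self, key, value):
        with open(os.path.join(self.path, key + ".json"), "w") as f:
            json.dump(value, f)

class MemorySecondDB:
    def __init__(self, path):
        self.disk = DiskStore(path)
        self.cache = {}                      # secondary, in-memory copy

    def get(self, key):
        if key not in self.cache:            # cache miss: go to disk first
            self.cache[key] = self.disk.read(key)
        return self.cache[key]

    def put(self, key, value):
        self.disk.write(key, value)          # disk is written first...
        self.cache.pop(key, None)            # ...then the cached copy must be
                                             # invalidated or it goes stale

db = MemorySecondDB("/tmp/memory_second_demo")
db.put("user1", {"name": "Ada"})
print(db.get("user1"))                       # first read repopulates the cache
```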

Next is memory-indirect, in which the operating system, rather than the database, copies data files into memory. While this is viewed as more efficient than memory-second, memory-indirect still suffers from file fragmentation and from performance issues when files are compacted.
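
One way to picture memory-indirect, assuming the hypothetical fixed-width file layout below, is a database that memory-maps its data files and lets the operating system’s page cache decide what stays in RAM. The database never manages memory directly, which also means it has little control over what is cached or how fragmented the underlying files become.

```python
# Illustrative memory-indirect pattern: the "database" memory-maps its data file
# and relies on the OS page cache, rather than managing memory itself.
import mmap

DATA_FILE = "/tmp/memory_indirect_demo.dat"   # hypothetical data file
RECORD_SIZE = 64                              # fixed-width records for simplicity

# Write a few fixed-width records so there is something to map.
with open(DATA_FILE, "wb") as f:
    for i in range(100):
        f.write(f"record-{i}".encode().ljust(RECORD_SIZE, b"\0"))

with open(DATA_FILE, "rb") as f:
    # The kernel pages this data in and out as memory pressure dictates;
    # the database only ever sees a byte range.
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    record_10 = mm[10 * RECORD_SIZE:11 * RECORD_SIZE].rstrip(b"\0")
    print(record_10.decode())                 # -> record-10
    mm.close()
```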

A third method is memory-first. In this approach, data is written to and read from memory first, which allows the database to perform faster reads and writes. “By optimizing data for disk and optimizing data for memory you are going to have a lot better performance and flexibility,” explained Johnson. “What might be more practical is to identify a working set and allocate enough resources in-memory.” This provides cost savings and allows a company’s most important data to be accessed faster.
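
The sketch below, again illustrative rather than any vendor’s actual engine, captures the memory-first idea Johnson describes: reads and writes are served from an in-memory structure sized to the working set, and a background thread persists mutations to disk afterward, so the disk is no longer in the critical path. The log path and key names are hypothetical.

```python
# Illustrative memory-first pattern: operations are served entirely from memory,
# and a background writer persists mutations to an append-only log afterward.
import json, queue, threading

class MemoryFirstDB:
    def __init__(self, log_path):
        self.data = {}                         # primary copy lives in memory
        self.pending = queue.Queue()           # mutations waiting to be persisted
        self.log_path = log_path
        threading.Thread(target=self._flusher, daemon=True).start()

    def put(self, key, value):
        self.data[key] = value                 # acknowledged once memory is updated
        self.pending.put((key, value))         # disk write happens asynchronously

    def get(self, key):
        return self.data.get(key)              # reads never touch the disk

    def _flusher(self):
        with open(self.log_path, "a") as log:
            while True:
                key, value = self.pending.get()
                log.write(json.dumps({"key": key, "value": value}) + "\n")
                log.flush()

db = MemoryFirstDB("/tmp/memory_first_demo.log")
db.put("user1", {"name": "Ada"})
print(db.get("user1"))                         # served from memory immediately
```

In a working-set design like the one Johnson suggests, only the hot subset of data would be kept in the in-memory structure, with colder data left on disk, which is where the cost savings come from.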

“In our own surveys, we have repeatedly found that three quarters of IT executives that we speak with are not equipped to handle an on-demand data-driven enterprise,” stated McKendrick. Companies have now begun to look toward in-memory data storage as a better option than disk storage. In-memory architecture is not a new technology; a few factors are simply making it a more attractive option in the industry today.

Memory has also gotten significantly less expensive over the years, making it financially feasible to store more and more data in memory. McKendrick recommended the in-memory approach to data storage and noted that, according to Unisphere’s research, the industry may be trending toward it as well: 32% of companies now report having in-memory databases, compared with 18% in 2012. When considering in-memory data storage, businesses need to evaluate the capabilities of their own IT teams and the ROI of this type of storage. Many within the industry anticipate in-memory data storage continuing to gain ground.

A replay of the webcast, “Building a Memory-First Database Architecture for Speed and Scalability,” is available at www.dbta.com/Webinars/664-Building-a-Memory-First-Database-Architecture-for-Speed-and-Scalability.htm.
