10 Ways In-Memory Databases are Helping Enterprises Get Ahead


In-memory databases and grids have entered the enterprise mainstream. New offerings from pure-play in-memory database providers, as well as from the large relational database management system vendors, are helping organizations scrambling to keep pace with the demands of an always-on, real-time economy. These in-memory databases are emerging in many forms, from extensions of relational database management systems to NoSQL databases and cloud-hosted NoSQL services.

These new technologies couldn't come a moment too soon. With enterprises of all stripes seeking to compete on analytics, speed is everything. Enterprise systems are being overwhelmed by new workloads coming through web and mobile channels, as well as through the Internet of Things, making faster processing a necessity. In today's ultra-competitive economy, enterprises need information and insights as soon as they are generated. Traditional business intelligence tools, weighed down by the latency inherent in moving data between disk and processor, are not meeting this demand, and enterprises can no longer afford to be held back by outdated, latency-ridden database environments. In-memory databases and technologies let decision makers reach the information they are seeking more rapidly and more readily.


These fast databases are also opening up enterprise data to new questions that decision makers could never ask of their slower, more cumbersome systems. "Shifting the data storage layer from disks to main memory can lead to more than 100x theoretical improvement in terms of response time and throughput," according to a recent IEEE paper authored by Hao Zhang, Gang Chen, Beng Chin Ooi, Kian-Lee Tan, and Meihui Zhang. With in-memory technology, data is stored in the random access memory (RAM) of servers rather than on disk, eliminating the latency of data making round trips between disk and memory. It also offloads much of the mundane work around query building and report generation from IT departments.

The ever-declining cost of memory is also making in-memory databases economically feasible. While in-memory technology has been on the market for many years, it is only lately that the price of memory has dropped to the point where processing data in RAM compares favorably to moving it in and out of disk. Growing business demands for real-time analysis and insight make in-memory a compelling option as well. Enterprises increasingly seek real-time capabilities, along with the ability to manage information coming in from the emerging Internet of Things. As a result, in-memory is being actively deployed across many enterprises today, from online ad placement to financial trading to production systems.

The benefits of in-memory, of course, depend on overall application performance; some bottlenecks have nothing to do with data moving between disk and processor. But there is plenty of disk-induced latency to wring out of existing infrastructures.
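To make that disk round-trip cost concrete, here is a minimal sketch, not drawn from the article or the IEEE paper; SQLite, the table layout, the row count, and the per-row commits are illustrative assumptions. It times the same insert workload against an in-memory database and a disk-backed file:

import sqlite3
import time

def time_inserts(conn, rows=5000):
    # Insert rows one at a time, committing each, and return elapsed seconds.
    cur = conn.cursor()
    cur.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
    start = time.perf_counter()
    for i in range(rows):
        cur.execute("INSERT INTO events (payload) VALUES (?)", (f"event-{i}",))
        conn.commit()  # each commit pushes the write through the storage layer
    return time.perf_counter() - start

in_memory = sqlite3.connect(":memory:")  # data lives entirely in RAM
on_disk = sqlite3.connect("events.db")   # data makes round trips to disk
                                         # (assumes events.db does not already exist)

print(f"in-memory: {time_inserts(in_memory):.2f}s")
print(f"on-disk:   {time_inserts(on_disk):.2f}s")

With every row committed individually, the disk-backed run is typically one to two orders of magnitude slower on commodity hardware, which is the kind of gap the IEEE authors describe; the exact ratio depends on the hardware, the database engine, and how writes are batched.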

The benefits in-memory databases provide to enterprises, both for IT departments and for the business at large, are numerous and include the following:

  1. Easier handling of big data. Enterprises of all types are now capturing and storing a wide variety of data, from images to documents to traditional relational data. In many cases, this data is being retained for longer periods of time, potentially for decades. The data may sit in older systems or storage formats, making it costly and time-consuming to extract and manage. In-memory databases also work well alongside Hadoop environments, and they are well suited to the deep analytics performed against large datasets for sophisticated functions such as predictive analytics, employed by data scientists and analysts.
