
The In-Memory Computing Landscape in 2020


In-memory computing is playing a vital role in solving this challenge, providing the speed and scale necessary for HTAP. HTAP enables a system to run real-time analytics on a company’s operational dataset without degrading transaction performance. By running those analytics on operational data held in RAM with massively parallel processing (MPP), an in-memory data grid can deliver the performance at scale that HTAP requires, supporting both real-time transactional and analytical processing on a single operational datastore. HTAP also has the long-term cost benefit of reducing the scope of, or eliminating the need for, a separate OLAP system.
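
As an illustration of the HTAP pattern, the sketch below uses Apache Ignite, one open source in-memory computing platform, to run a transactional insert and a real-time analytical aggregate against the same in-memory table. The platform choice is only an example, and the table, cache, and field names are hypothetical.

```java
import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

/** HTAP sketch: transactions and analytics on the same in-memory dataset. */
public class HtapSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Any cache handle can be used to submit SQL to the cluster.
            IgniteCache<?, ?> sql = ignite.getOrCreateCache("sql-entry-point");

            // Operational table held in RAM, partitioned across the cluster.
            sql.query(new SqlFieldsQuery(
                "CREATE TABLE IF NOT EXISTS trades (" +
                " id BIGINT PRIMARY KEY, account VARCHAR, amount DECIMAL)" +
                " WITH \"template=partitioned\"")).getAll();

            // Transactional-style write (the OLTP side of HTAP).
            sql.query(new SqlFieldsQuery(
                "INSERT INTO trades (id, account, amount) VALUES (?, ?, ?)")
                .setArgs(1L, "ACME", 250.00)).getAll();

            // Real-time analytical aggregate (the OLAP side of HTAP),
            // executed in parallel on the nodes that own each partition.
            List<List<?>> totals = sql.query(new SqlFieldsQuery(
                "SELECT account, SUM(amount) FROM trades GROUP BY account"))
                .getAll();

            totals.forEach(row ->
                System.out.println(row.get(0) + " -> " + row.get(1)));
        }
    }
}
```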

In-Memory Computing and Mainframes

The use of mainframe computers remains pervasive in the financial services industry. The IBM Z platform, for example, is used by 92 of the world’s top 100 banks, all of the top 10 insurance organizations, and 64% of the Fortune 500. However, firms that rely on a mainframe for transaction processing may still want to implement a digital integration hub (DIH) to create real-time business processes based on a combination of operational and historical data.

In-memory computing platforms that have been optimized to run on a mainframe enable these firms to take advantage of a DIH architecture. For example, a company can use an in-memory computing platform to create a SQL-driven data access layer that runs on the mainframe. The DIH can connect to data sources, including operational databases running on or off the mainframe, as well as a portion of the data held in the firm’s data lake. Processing can then be run on the combined dataset held in the in-memory computing engine at the heart of the DIH to drive real-time business processes. This can enable a financial institution, for example, to build a 360-degree customer view by analyzing all the data for a particular customer held in the firm’s operational database and historical data lake, and then use that view to drive upsell and cross-sell programs or seamless customer interactions across all of the firm’s touch points.
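
As a sketch of what querying such a SQL-driven data access layer might look like from a client application, the example below uses the Apache Ignite JDBC thin driver. The host name, tables, and columns are hypothetical; in a real DIH they would be populated from the operational (possibly mainframe-resident) systems and the data lake.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

/** Sketch: a 360-degree customer view query against the DIH's SQL layer. */
public class CustomerViewSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint of the in-memory data access layer.
        String url = "jdbc:ignite:thin://dih-node.example.com:10800";

        String query =
            "SELECT c.customer_id, c.segment, SUM(h.purchase_amount) " +
            "FROM customers c " +             // loaded from the operational DB
            "JOIN purchase_history h " +      // loaded from the data lake
            "  ON c.customer_id = h.customer_id " +
            "WHERE c.customer_id = ? " +
            "GROUP BY c.customer_id, c.segment";

        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement ps = conn.prepareStatement(query)) {

            ps.setLong(1, 42L);   // hypothetical customer id

            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next())
                    System.out.printf("customer=%d segment=%s lifetime=%s%n",
                        rs.getLong(1), rs.getString(2), rs.getBigDecimal(3));
            }
        }
    }
}
```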

Non-Volatile Memory

One of the most exciting emerging developments in memory technology is non-volatile RAM, or NVRAM. Nearly all computing today still separates very fast volatile memory (RAM), used for running applications, from slower non-volatile storage (hard disks, SSDs, etc.). The challenge with this approach is that applications and data must be loaded into RAM from storage each time a computer is turned on, and everything in RAM is lost each time the computer is turned off. This means there is always a potential for data loss in the event of a system crash, and the larger the database, the longer users must wait after a restart.

Today, as data requirements scale to terabytes and petabytes, the situation has become critical. Strategies for preventing data loss have become more cumbersome and expensive, and it can take hours or days for data to load into RAM on even the most powerful systems. However, new non-volatile RAM technology, such as Intel Optane, is now generally available, and as prices drop, NVRAM will become a vital solution for protecting data and accelerating large-scale database systems. NVRAM can also be combined with in-memory computing platforms, allowing companies to unite in-memory speed with the lower cost and durability of non-volatile storage and to fine-tune the balance between optimal performance, data protection, and overall system cost.
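
A minimal sketch of how an in-memory platform can pair RAM with durable storage is shown below, again using Apache Ignite’s native persistence purely as an illustration. The region size and the storage path (shown here as a hypothetical mount point for a non-volatile memory device) are assumptions to be tuned per deployment.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterState;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

/** Sketch: combining in-memory speed with durable, non-volatile storage. */
public class DurableMemorySketch {
    public static void main(String[] args) {
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("durable-region")
            .setMaxSize(8L * 1024 * 1024 * 1024)   // keep up to 8 GB hot in RAM
            .setPersistenceEnabled(true);          // persist the full dataset

        DataStorageConfiguration storage = new DataStorageConfiguration()
            .setDefaultDataRegionConfiguration(region)
            // Hypothetical path: an NVRAM device exposed as a file system.
            .setStoragePath("/mnt/pmem0/ignite/storage");

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storage);

        try (Ignite ignite = Ignition.start(cfg)) {
            // Persistent clusters start inactive; activate once nodes join.
            ignite.cluster().state(ClusterState.ACTIVE);

            // After a restart, the data is already on durable storage, so the
            // cluster does not need to reload everything into RAM first.
        }
    }
}
```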

The In-Memory Computing Platform

The key technology powering many of these computing advances is the in-memory data grid, which is typically delivered as part of an in-memory computing platform. An in-memory computing platform pools the available RAM and compute of a server cluster, which can be easily scaled by adding nodes to the cluster. By maintaining data in RAM, the platform eliminates the delays caused by accessing data stored in disk-based databases. Further, by utilizing the MapReduce programming model to distribute processing across the cluster, the platform also provides MPP and can minimize or eliminate movement of the data in the grid across the network prior to processing.
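
To make the "move the computation, not the data" idea concrete, the sketch below routes a small job to whichever node owns a given key, so the value is processed where it resides. The cache name, key, and job logic are hypothetical, and Apache Ignite is used only as one example of such a platform.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteRunnable;
import org.apache.ignite.resources.IgniteInstanceResource;

/** Sketch: routing a job to the node that owns the data (colocated compute). */
public class ColocatedComputeSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, Double> balances = ignite.getOrCreateCache("balances");
            balances.put(42L, 1250.0);

            // The job is sent to whichever node owns key 42, so the value
            // never has to move across the network before processing.
            ignite.compute().affinityRun("balances", 42L, new BalanceJob(42L));
        }
    }

    /** Runs on the data-owning node and reads the value from its local copy. */
    static class BalanceJob implements IgniteRunnable {
        @IgniteInstanceResource
        private transient Ignite ignite;   // injected on the node running the job

        private final long key;

        BalanceJob(long key) {
            this.key = key;
        }

        @Override public void run() {
            IgniteCache<Long, Double> balances = ignite.cache("balances");
            System.out.println("Processed locally, balance=" + balances.get(key));
        }
    }
}
```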

This combination can improve application performance by up to 1,000x and create a common, high-performance data access layer that makes data from many datastores available in real time to many applications. In-memory computing platforms can typically be run as a standalone in-memory database or as an in-memory data grid inserted between an application and its existing data layer.
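
The "data grid in front of an existing data layer" deployment is commonly wired up through read-through/write-through caching. The sketch below shows the shape of that pattern with Apache Ignite; the cache name is hypothetical, and the stub store stands in for real calls to the existing database.

```java
import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

/** Sketch: an in-memory data grid fronting an existing data layer. */
public class DataGridFrontEndSketch {
    public static void main(String[] args) {
        CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("customers");
        ccfg.setReadThrough(true);    // cache misses fall through to the existing DB
        ccfg.setWriteThrough(true);   // writes are propagated to the existing DB
        ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(DbStore.class));

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, String> customers = ignite.getOrCreateCache(ccfg);

            // First access loads through the store; later reads are served from RAM.
            System.out.println(customers.get(42L));
        }
    }

    /** Placeholder store; a real one would issue SQL against the existing database. */
    public static class DbStore extends CacheStoreAdapter<Long, String> {
        @Override public String load(Long key) {
            return "row-" + key;          // e.g. SELECT ... WHERE id = ?
        }
        @Override public void write(Cache.Entry<? extends Long, ? extends String> e) {
            // e.g. UPSERT into the existing database
        }
        @Override public void delete(Object key) {
            // e.g. DELETE from the existing database
        }
    }
}
```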

Businesses undergoing digital transformations are under tremendous pressure to achieve unprecedented levels of data processing performance and scalability to support their real-time business processes. At the same time, we are witnessing an extraordinarily rapid evolution of in-memory computing technologies designed to support these business initiatives. This means that developers and system designers must fully understand the potential of in-memory computing to impact their business models, and they must pay close attention to the latest developments. As with all new technologies, internal teams should rely on third-party experts if they don’t have the expertise for such evaluations.
