Building Scalable, Mission-Critical NoSQL Databases

NoSQL databases are becoming increasingly popular for analyzing big data.  Few NoSQL solutions, however, provide the combination of scalability, reliability and data consistency required by mission-critical applications.  As the open source implementation of Google’s BigTable architecture, HBase is a NoSQL database that integrates directly with Hadoop and meets these requirements for a mission-critical database. 

HBase 101

HBase, a project of The Apache Software Foundation, is a non-relational, distributed database modeled after Google’s BigTable, and has been designed to provide a fault-tolerant way to store large datasets of sparse and/or loosely structured data.  Like BigTable, HBase offers several useful big data features, including compression, fast in-memory operation, and filtering on a per-column basis. 

Because HBase is a key-value NoSQL database built atop the Hadoop Distributed File System (HDFS), it is able to leverage other elements of Hadoop, particularly the MapReduce framework.  Estimates suggest that nearly half of all Hadoop users today are using HBase to store and analyze big data. 
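Under the hood, an HBase table behaves like a sparse, sorted map in which each cell is addressed by row key, column family, column qualifier and timestamp.  A minimal in-memory sketch of that addressing scheme, using plain Python dictionaries (illustrative only; this is not the HBase client API, and all names are made up):

```python
# Illustrative model of HBase cell addressing:
# (row key, "family:qualifier", timestamp) -> value.  Not the real client API.
from collections import defaultdict

table = defaultdict(dict)  # row key -> {("family:qualifier", timestamp): value}

def put(row, family, qualifier, timestamp, value):
    table[row][(f"{family}:{qualifier}", timestamp)] = value

def get_latest(row, family, qualifier):
    """Return the newest version of a cell, as HBase reads do by default."""
    col = f"{family}:{qualifier}"
    versions = [(ts, v) for (c, ts), v in table[row].items() if c == col]
    return max(versions)[1] if versions else None

put("user#42", "info", "name", 1, "Ada")
put("user#42", "info", "name", 2, "Ada Lovelace")  # newer version shadows older
print(get_latest("user#42", "info", "name"))
```

Because cells are versioned by timestamp, a read simply picks the newest version; older versions remain available until they are cleaned up.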

HBase tables can serve as the input and/or output for MapReduce jobs. 

It is the ability to process large datasets in parallel and in manageable steps that makes MapReduce particularly appealing.  During the “Map” phase, a master node reads the input file(s), partitions the dataset into smaller subsets, and distributes them to worker nodes for processing.  During the “Reduce” phase, the master node accepts the processed results from all worker nodes, and then combines, sorts and writes them to an output file or table.  This output can, optionally, become the input to additional MapReduce jobs that further process the data.  Programmers also appreciate being able to write MapReduce jobs in a variety of languages, including Java, C, PHP, Python or Perl, which makes it easy to incorporate existing algorithms into the HBase environment. 
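The map/shuffle/reduce flow described above can be sketched in a few lines of plain Python (a single-process simulation, not the Hadoop API; the “job” here is a simple word count):

```python
# Single-process sketch of the MapReduce flow: map -> shuffle/group -> reduce.
from collections import defaultdict

def map_phase(records):
    """Map: emit (key, value) pairs -- here, (word, 1) for a word count."""
    for line in records:
        for word in line.split():
            yield word, 1

def shuffle(pairs):
    """Shuffle: group all values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine each key's values into one result per key."""
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["big data", "big tables"])))
print(counts)  # {'big': 2, 'data': 1, 'tables': 1}
```

In a real Hadoop job the map and reduce steps run on many worker nodes at once, with the framework handling the shuffle; the logic per key, however, is exactly this simple.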

HBase Challenges

Despite its many advantages, HBase is not without limitations, and these have prevented more widespread adoption.  The basic challenges involve operational issues ranging from inconsistent performance and potential data loss to complex administrative procedures. 

These challenges are rooted in how HBase is implemented and the resulting need to accommodate inherent limitations in the write-once Hadoop Distributed File System.  To address these limitations, HBase employs a layered architecture that requires several components, including an HBase Master node, multiple Region Servers (for distributing data throughout the cluster of nodes) and ZooKeeper (Hadoop’s coordination service for distributed applications)—all atop a Java Virtual Machine and the underlying Linux file system.

These layers introduce several single points of failure that can make HBase unreliable and difficult to administer.  In addition, the use of write-once storage can cause input/output (I/O) storms during compaction, which adversely affects performance. 

Some HBase users try to mitigate these issues by pre-splitting or sharding tables, and/or by scheduling compactions or performing them manually.  The overall operational complexity leads to a critical dependency on (scarce) HBase experts to keep the system running well.
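Pre-splitting means creating a table with region boundaries chosen up front, so writes spread evenly across Region Servers rather than piling into a single region.  A hypothetical sketch of computing evenly spaced split keys over a hex-encoded key space (the helper function and its parameters are illustrative, not an HBase API):

```python
# Compute N-1 evenly spaced split keys over a hex key space -- the kind of
# boundaries you would supply when pre-splitting a table into N regions.
def hex_split_keys(num_regions, key_width=8):
    key_space = 16 ** key_width            # e.g. 8 hex chars -> 16^8 possible keys
    step = key_space // num_regions
    return [format(step * i, f"0{key_width}x") for i in range(1, num_regions)]

print(hex_split_keys(4))  # ['40000000', '80000000', 'c0000000']
```

Evenly spaced boundaries like these only balance load if row keys are themselves uniformly distributed (e.g. hashed), which is why key design and pre-splitting are usually considered together.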

HBase Enhancements

Because these operational challenges also occur with HDFS itself (with its Name Node, Checkpoint Node and Backup Node single points of failure), some distributions of Hadoop have re-architected the file system to make clusters easier to administer and to improve data protection. 

Unifying the file system, while maintaining compatibility with the HDFS API, MapReduce and other aspects of Hadoop, eliminates the complexities caused by layering.  With a unified architecture, files and tables share a common namespace, and can be managed using volumes and directories that are fully protected with snapshots and mirroring.  The integration of files and tables into a single datastore also simplifies HBase application development, and increases performance, reliability and scalability.

With a unified namespace for files and tables stored in volumes, administrators can set policies, quotas and access privileges for different users and/or applications.  The use of volumes also makes it possible to implement advanced capabilities, such as data placement and multi-tenancy configurations.  Isolated work environments can be created for security, and tables can be placed on specific nodes to improve performance. 

With appropriate policies in place, developers can create their own tables without any administrative involvement.  Enhanced scalability enables developers to create as many temporary tables as needed to optimize application workflow.  And with expanded row and cell size limits, developers can store large objects, such as images or videos, thereby making HBase suitable for more applications.

Performance of HBase Applications

Performance of HBase applications depends on several underlying storage system factors.  These factors include I/O to disk, data locality, disk space overhead and skewed data handling (rewriting values for similar keys).  A unified file system minimizes the adverse impact of these factors, resulting in significantly better performance with no need for constant performance tuning.

Another performance-related issue with HBase involves compactions.  Compaction is the process HBase uses to merge the multiple files that accumulate for a region in the Hadoop Distributed File System, and this task is often performed manually, with files backed up before and afterwards.  A unified file system has no such requirement.
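At its core, a compaction merges several sorted store files into one, keeping the newest value where keys collide.  A deliberately simplified sketch in plain Python (illustrative only; real HBase compactions also handle deletes, TTLs and multiple cell versions):

```python
# Simplified compaction: merge sorted store files into one sorted file,
# keeping the value from the newest file when the same key appears twice.
def compact(store_files):
    """store_files: list of {key: value} dicts, ordered oldest first."""
    merged = {}
    for store in store_files:        # later (newer) files overwrite older ones
        merged.update(store)
    return dict(sorted(merged.items()))  # a single file, sorted by key

old = {"a": 1, "b": 2}
new = {"b": 20, "c": 3}
print(compact([old, new]))  # {'a': 1, 'b': 20, 'c': 3}
```

Rewriting every key-value pair this way is exactly why compactions generate heavy I/O: the cost is proportional to the total size of the files being merged, not to the amount of new data.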

In addition to the ability to create management policies for files and tables, data volumes are a key enabling technology for some important reliability and data protection enhancements.  For example, administrators can schedule snapshots for HBase tables to accommodate recovery point objectives for each application.  In the event of a human or program error, point-in-time recovery is achieved by accessing the snapshot version of the table(s) involved.  The recovery can be as frequent or as granular as desired with minimal impact on performance.
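Conceptually, point-in-time recovery from a snapshot amounts to reading only the cell versions written at or before the snapshot timestamp.  A toy model of that idea (real snapshots reference immutable store files rather than filtering versions, so this is illustrative only):

```python
# Toy model of point-in-time recovery: a snapshot is just a timestamp, and
# recovery reads the newest cell version at or before that timestamp.
def read_as_of(cell_versions, snapshot_ts):
    """cell_versions: list of (timestamp, value) writes to a single cell."""
    visible = [(ts, v) for ts, v in cell_versions if ts <= snapshot_ts]
    return max(visible)[1] if visible else None

writes = [(1, "v1"), (5, "v2"), (9, "oops-bad-write")]
print(read_as_of(writes, snapshot_ts=5))  # 'v2', the state before the bad write
```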

Mirroring provides even better data protection.  Any table can be mirrored to a separate, and usually geographically dispersed, cluster for disaster recovery and business continuity purposes.  With mirroring, if there is a catastrophic failure at any site, the HBase application is immediately available at the recovery site, as there are no Region Servers to restart or any need for an HBase Master node to split the recovery logs.  Mirroring can also be used to provide separate access to identical datasets. 

Finally, these data protection capabilities make it possible to avoid planned downtime during routine upgrades.  The cluster can be upgraded a few nodes at a time, resulting in a less disruptive “rolling upgrade” with no loss of data locality because there are no Region Servers to be temporarily displaced from the data.

New Versions of Hadoop and HBase Offer Advantages

By leveraging the power and popularity of Hadoop, HBase has become a capable and scalable database solution for many NoSQL applications.  But like Hadoop, HBase is not without its operational challenges. 

Because these challenges are rooted in how HBase leverages Java and the underlying Hadoop Distributed File System, versions of Hadoop and HBase are now becoming available with re-architected data platforms that eliminate the problems caused by layering.  These unified systems improve performance while making HBase easier to use and administer, and more scalable and reliable.  And together these advantages make HBase suitable for even more NoSQL database applications.