Considering Scale-Out Architectures to Avoid the “Big Squeeze”

In his presentation at the Strata + Hadoop World conference, titled “Unseating the Giants: How Big Data is Causing Big Problems for Traditional RDBMSs,” Monte Zweben, CEO and co-founder of Splice Machine, addressed the topic of scale-up architectures as exemplified by traditional RDBMS technologies versus scale-out architectures, exemplified by SQL on Hadoop, NoSQL and New SQL solutions.

Zweben said the problem is that many companies are facing a “big squeeze” with big data: IT budgets are relatively flat, growing by only 3% to 4% a year, while data is growing by an average of 30% to 40%, and there is a consensus that data is a valuable commodity that cannot be thrown away.

While showcasing examples of companies that have benefited by selecting a scale-out architecture, Zweben outlined what he sees as the top reasons to choose scale-out versus scale-up.

The top considerations in favor of scale-up are whether the company can afford it, the need to maintain custom code, proven reliability, avoiding the risk that comes with newer technologies, and the fact that less migration is required.

On the other hand, by choosing scale-out architectures, he said, companies can reduce costs by 4x to 10x, increase performance by 3x to 10x, improve scalability, support flexible schemas, and access a growing ecosystem of open source tools.

For more information on Splice Machine, go to

Related Articles

Splice Machine today announced the general availability of its Hadoop RDBMS, a platform for building real-time, scalable applications, which incorporates new features that emerged from charter customers using the beta offering. With the additional new features and the validation from beta customers, Splice Machine 1.0 can support enterprises struggling with their existing databases and seeking to scale out affordably, said Monte Zweben, co-founder and CEO, Splice Machine.

Posted November 19, 2014