Hortonworks Data Cloud for AWS Now Available


Hortonworks, Inc. has announced the availability of Hortonworks Data Cloud for AWS, expanding customers' ability to take advantage of the elasticity of Apache Hadoop and Apache Spark in the cloud to power new workloads and analytic applications.

“We are announcing a new product offering called Hortonworks Data Cloud for AWS, which will be available from the Amazon Web Services Marketplace," said Shaun Connolly, chief strategy officer, Hortonworks. The offering is aimed specifically at customers with a strong preference for Amazon Web Services, he noted, and is built natively for AWS with both hourly and annual billing options available.

According to Connolly, there are three related megatrends in the IT industry—cloud computing, the Internet of Things, and big data and analytics—and all three are being fueled by increased demand for real-time decision making, competitive advantage, and better cost management.

Hortonworks has been in the cloud for years with Azure HDInsight, said Connolly. Like HDInsight, the new Hortonworks Data Cloud for AWS is powered by Hortonworks Data Platform services, but it is focused on the AWS environment, giving developers and data scientists a fast path to deploying Apache Hive and Apache Spark environments quickly within their AWS accounts and getting straight to analyzing data.

The new AWS offering continues Hortonworks’ connected data architecture approach, which helps to enable agility, elasticity, cost control, and real-time analytics. With Hortonworks Data Cloud for AWS, the company says, businesses can achieve insight into data faster and with greater flexibility than was previously possible.

More than 25% of Hortonworks’ existing customers already use its solutions in public clouds, whether Azure or AWS, said Connolly. This new product offering, however, is aimed directly at what he calls the “rinse-and-repeat use cases”—“ephemeral, self-service use cases” for data science, exploration, and data preparation, where users typically boot up an environment, do their work, and then shut it down.

In particular, the hourly pricing model enables strong cost control, and the offering is also designed to integrate with AWS services such as Amazon S3, RDS, and EC2.
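The cost-control argument for the ephemeral, boot-up/shut-down pattern can be made concrete with a back-of-the-envelope calculation. The rates and session counts below are purely hypothetical placeholders, not actual Hortonworks or AWS pricing; the sketch only illustrates how hourly billing maps cost to actual usage rather than to calendar time.

```python
# Illustrative cost comparison: ephemeral, hourly-billed cluster usage
# versus an always-on annual commitment. All figures are hypothetical
# placeholders for illustration only, not real pricing.

HOURLY_RATE = 2.50        # hypothetical $/hour for a small cluster
ANNUAL_RATE = 15_000.00   # hypothetical $/year for the same cluster, always on

def ephemeral_cost(sessions_per_week: int, hours_per_session: float,
                   weeks: int = 52) -> float:
    """Cost when the cluster is booted for each job and shut down after."""
    return sessions_per_week * hours_per_session * weeks * HOURLY_RATE

# e.g. three 4-hour data-prep sessions per week, all year
cost = ephemeral_cost(sessions_per_week=3, hours_per_session=4)
print(f"Ephemeral hourly cost: ${cost:,.2f} vs. always-on: ${ANNUAL_RATE:,.2f}")
```

Under these made-up numbers, paying only for the hours actually used comes to a fraction of the always-on price, which is the essence of the “rinse-and-repeat” economics described above.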

In addition, it is a highly prescriptive experience, configured and pre-tuned for the most popular use cases: data science and exploration, data preparation and ETL, and data analytics and reporting. This lets data scientists, developers, and end users be more productive, with more time to focus on processing and deriving value from data and less time spent configuring and operating data platform infrastructure.

For more information, go to