February 2017 - UPDATE


Trends and Applications

Today's successful organizations are data-driven, and many are building, maintaining, and accessing databases that scale well beyond the terabyte range. In fact, many have total data assets that now measure in the petabytes. But it's not just the size of databases that is expanding.

No one solution can be all things to all enterprises, and as enterprises began to deploy vSAN across their environments, they noticed that a big piece was missing. Despite its many benefits, vSAN lacks support for a file system. The importance of having a file system within a data center cannot be overstated. Without one, guest VMs (virtual machines) cannot share files among themselves and are forced to rely on an external NAS (network-attached storage) solution for shared storage. And without a file system overlaying this data, managing it becomes laborious and scaling it efficiently becomes impossible.

The desire to compete on analytics is driving the adoption of new technologies such as NoSQL and Hadoop to store and process large volumes of data. Jamie Morgan, senior solutions architect at HPE Security - Data Security, and Kevin Petrie, senior director and technology evangelist at Attunity, recently discussed key technologies and best practices for data integration, data quality, and data security during a DBTA roundtable webinar.

Data centers are now designed to accommodate not only current regulations and standards but also predicted capacity requirements, while simultaneously reducing operating costs. One solution offered by many experts in the field is the use of scalable or modular designs. The aisle containment structure, in particular, is gaining popularity because it meets these goals by providing an infrastructure that is repeatable and rapidly deployable.

Apache Spark offers a solid foundation for machine learning. Other tools and packages can help you dive into deep learning, but Spark provides a consistent approach to data access and therefore makes machine learning on Spark easier, since less plumbing is required.