The unprecedented volumes of data now pouring forth from virtualization technologies and cloud-based infrastructure have taxed traditional storage architectures beyond their limits. Organizations want to store their data in an efficient and affordable way that is also flexible. Software-defined storage (SDS) meets these criteria and, therefore, is enjoying a surge in adoption.
In fact, IDC forecasts that the SDS market will reach nearly $16.2 billion in revenue by 2021, growing at 13.5 percent annually over the 2017-2021 forecast period. Enterprise storage spending is shifting from on-premises IT infrastructure toward private, public and hybrid cloud environments, and from hardware-defined, dual-controller array designs toward SDS.
However, caveat emptor. Like any other tool, SDS approaches can vary wildly. It’s important to make sure that your choice can handle your enterprise workloads before making the switch. Here are some things to look for.
What SDS Offers
The SDS approach has many benefits compared to traditional storage architectures. SDS can run on commercial, off-the-shelf hardware while delivering better and faster functionality, such as provisioning and de-duplication, in software. It also offers easier, more intuitive, autonomous storage-management capabilities that lower administrative costs, along with greater agility and reduced expenditure thanks to the lower-cost hardware.
Without a grounding in current software-defined storage capabilities, those tasked with choosing an SDS option for their organizations could end up with a lot of marketing hype and less-than-satisfactory options. So, here's guidance to inform your choices as you look for a truly versatile, cost-efficient, unified approach.
A Cohesive SDS Approach
What does SDS look like? It's a hardware-agnostic platform that scales horizontally and fully supports flash media. It enables the kind of flexibility and performance that is critical to the future of storage.
Below are important features to look for in SDS:
File features: The file systems that SDS providers offer are often based on freeware, which can lack important features most Windows users are accustomed to. Therefore, thoroughly vet the file-related features you are being offered; make sure they include snapshots, quotas, antivirus, encryption and tiering.
Unified storage: Object storage is the latest media darling. It is used for machine-to-machine/IoT transactions and other applications that require extreme scalability but have no stringent performance requirements. On its own, however, it isn't good at managing unstructured data; for that you need file storage as well. Add block storage, and you have a truly unified approach.
Network-attached storage (NAS): It is very important to have consistency in a scale-out NAS, meaning files are accessible from all nodes at the same time. Look for consistency in SDS approaches as part of your research.
Hyper-converged capability: The scale-out NAS needs to be able to run hyper-converged, since hybrid cloud solutions require support for hypervisors.
Storing metadata: Metadata are bits of information that describe the structure of the file system, and they are a very important piece of this virtual system. For example, one metadata file can contain information about what files and folders are contained in a single folder in the file system. That means you will have one metadata file for each folder in your virtual file system. As the virtual file system grows, you will get more and more metadata files. Make sure your prospective choice’s storage layer is based on object store so that you can store all your metadata there. This will ensure good scalability, performance and availability.
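To make the one-metadata-file-per-folder idea concrete, here is a minimal sketch in Python. The names and the dict standing in for an object store are hypothetical, not any particular vendor's API:

```python
# Minimal sketch: one metadata object per folder, keyed by folder path.
# A plain dict stands in for the object-store backend described above.
import json

object_store = {}  # hypothetical object store: key -> serialized metadata

def write_folder_metadata(folder_path, files, subfolders):
    """Store one metadata object describing a folder's contents."""
    record = {"files": files, "subfolders": subfolders}
    object_store["meta:" + folder_path] = json.dumps(record)

def read_folder_metadata(folder_path):
    """Fetch and decode the metadata object for a folder."""
    return json.loads(object_store["meta:" + folder_path])

# As the virtual file system grows, each new folder adds one more
# metadata object -- scalability then comes from the object store itself.
write_folder_metadata("/", files=[], subfolders=["projects"])
write_folder_metadata("/projects", files=["report.docx"], subfolders=[])
```

Because every folder maps to an independent object, lookups and updates stay small and the object store's own replication provides availability for the metadata.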
Sharing: Organizations that use a hybrid cloud approach likely have multiple office locations, each needing both a private area and an area shared with other branches; only parts of the file system will be shared. Selecting a section of a file system and letting other sites mount it at any point in their own file systems provides the flexibility needed to scale, provided synchronization happens at the file-system level so that every site has a consistent view. Being able to specify different file encodings at different sites is also useful, for example, if one site is used as a backup target.
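The share-and-mount pattern above can be sketched as follows. This is an illustrative toy model, not a real SDS interface; the `Namespace` class and its methods are invented for the example:

```python
# Hypothetical sketch: one site exposes a subtree of its file system,
# and another site mounts that subtree into its own namespace.

class Namespace:
    def __init__(self, name):
        self.name = name
        self.tree = {}    # path -> file contents (flat map, for brevity)
        self.mounts = {}  # local mount point -> (remote namespace, remote path)

    def write(self, path, data):
        self.tree[path] = data

    def share(self, subtree):
        """Expose a subtree so other sites can mount it."""
        return (self, subtree)

    def mount(self, mount_point, shared):
        remote_ns, remote_path = shared
        self.mounts[mount_point] = (remote_ns, remote_path)

    def read(self, path):
        for mp, (remote_ns, remote_path) in self.mounts.items():
            if path.startswith(mp):
                # Resolve through the mount: both sites see the same data.
                return remote_ns.read(remote_path + path[len(mp):])
        return self.tree[path]

hq = Namespace("hq")
hq.write("/shared/policies.txt", "v1")
hq.write("/private/payroll.txt", "secret")  # never shared

branch = Namespace("branch")
branch.mount("/hq-shared", hq.share("/shared"))
```

The key point the sketch illustrates is that only the selected subtree crosses site boundaries; reads through the mount resolve at the file-system level, which is what keeps the shared view consistent.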
Caching: Performance remains important in the storage world, so SDS needs caching devices. It is also important to protect data at a higher level by replicating it to another node before de-staging it from the cache to the storage layer.
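The replicate-before-de-stage idea can be sketched as a toy write-back cache. The class and method names here are assumptions for illustration, not any product's API:

```python
# Hypothetical sketch: a write-back cache that replicates each write to a
# peer node before acknowledging, then de-stages to the storage layer
# later. If the caching node fails before de-staging, the peer's copy
# still protects the data.

class Node:
    def __init__(self, name):
        self.name = name
        self.cache = {}

    def replicate(self, key, value):
        self.cache[key] = value

class CachingNode(Node):
    def __init__(self, name, peer, storage_layer):
        super().__init__(name)
        self.peer = peer
        self.storage = storage_layer

    def write(self, key, value):
        self.cache[key] = value
        self.peer.replicate(key, value)  # protect data before de-staging
        return "ack"                     # safe to acknowledge: two copies exist

    def destage(self):
        # Flush cached writes to the slower storage layer, then drop
        # them from both caches.
        for key, value in list(self.cache.items()):
            self.storage[key] = value
            del self.cache[key]
            self.peer.cache.pop(key, None)

storage = {}
peer = Node("node-b")
node = CachingNode("node-a", peer, storage)
node.write("blk-1", b"data")
node.destage()
```

The design choice the sketch highlights: the write is acknowledged only after the peer holds a replica, so the fast cache never becomes a single point of data loss.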
Time, tide and data wait for no one. Organizations realize that they must embrace new storage methods or risk extinction. Traditional architectures are just too expensive and inflexible to meet today's demands. Software-defined storage is a winning alternative, yet the options on the market vary considerably and require close vetting. The guidelines above will help you make an informed decision that fully addresses your organization's current and future storage needs.