Today, data is driving business success. Downtime is a killer and even slow response time can be hazardous to business health. But how do organizations achieve the performance and speed that is required today amid growing complexity and rapid data growth?
Recently, Nihal Mirashi, a principal product marketing manager at Pure Storage, and Joe McKendrick, lead analyst for Unisphere Research and a contributing editor for Database Trends and Applications, discussed how organizations are revising their game plans for business agility and resilience and the role that the right storage solution can play. The conversation, recorded for a Pure Report podcast, was moderated by Rob Ludeman, director of solutions marketing at Pure Storage.
The wide-ranging conversation spanned the requirements for high availability, the rise of open source databases, the role of cloud, the push for digital transformation, and the key capabilities that organizations need to look for in a technology partner in order to meet their data-driven goals.
Slow is the New Down
If time is money, slow is the new down, according to Mirashi and McKendrick. Years ago, said McKendrick, he toured a converted factory that had been made into a disaster recovery center outfitted with cots and supplies to sustain workers while they restored data from backup tapes to get their companies back up and running after a catastrophic data loss. Fast-forward to today, and data recovery must be immediate. The estimated cost of downtime increases every year, and even slow response times can be devastating. “The technology has evolved to where things need to be instantaneous,” said Ludeman. Agreeing, McKendrick cited a Unisphere Research survey in which data professionals were asked how long they would wait on an ecommerce site before switching to another. Their answer, he said, was no more than about 7-10 seconds, even for data professionals who understand better than anyone all the work that goes on behind the scenes.
Open Source is Going Mainstream
Another big change in recent years has been the disruption caused by open source databases, said Ludeman, as they have moved from test and development scenarios to mission-critical production environments. Many of these open source databases are less expensive in terms of licensing, scalable, well-suited to handling unstructured data, and often deliver faster time to value. And while they are not free by any means, they are often more lightweight, flexible, and supportive of experimentation than more traditional options.
“We are seeing the use of open source databases in a variety of use cases and industries,” said Mirashi. Fraud detection in financial services is a common one, he explained. Some financial services firms, for example, have built their own fraud detection systems with open source databases on the back end, driven by the huge growth in data volumes, much of it unstructured.
“It’s the old adage of the right tool for the right job,” said Ludeman. “Open source databases are here to stay and they are mainstream,” added Mirashi, while emphasizing that most companies have a mixture of open source and commercial databases. “What it all comes down to is being able to support real-time decisions.”
DevOps and DataOps for Agility
With agility becoming a key priority for many organizations, DevOps and DataOps are also gaining ground, McKendrick pointed out. With DevOps, development teams work closely with the operations teams to coordinate processes for continuous delivery of software, while DataOps helps to ensure the flow of data across the organization in support of the democratization of analytics.