The Data Scene in 2017: More Cloud, Greater Governance, Higher Performance

Page 3 of 4


However, it will take time before enterprises embrace the full potential of machine learning. “Right now, machine learning is to the analytics field what big data was 5 years ago,” said Bob Selfridge, CEO and founder of TMMData. “Everyone wants it, but not everyone knows why they want it, or how to make it work for them.” Selfridge sees immediate potential in machine learning-driven modeling because it is “an incredibly powerful way to see the future, act in the present, and react to the past.” The success of this modeling, he added, depends on the “quality, completeness, taxonomy, and governance of data.” That’s why “machine learning is like a new family puppy: Everyone is excited to get it, but a lot of preparation is needed. You must continuously feed it, cleanup is always involved, and lack of training will end in some very unfortunate circumstances.”

DevOps Ensures Continuous Delivery

DevOps—the alignment of development output with operations and scheduling—has become the darling of IT and data management organizations because it helps deliver ongoing iterations of software under tight and demanding deadlines. “Companies are fed up with the barriers to delivering applications,” said Ashok Reddy, general manager, Mainframe, at CA Technologies. “Those barriers include long wait times, not enough skilled resources, opaque processes, siloed development, and inflexibility. Better processes and tools that support continuous delivery lifecycles are needed to address these barriers.” The DevOps approach is helping to ensure continuous integration and delivery of applications, which has an important impact on data availability, he continued. “The real value lies in taking the most mission-essential data and making it useful for all business operations—securely and with minimal risk.” However, he cautioned, the move to full-functioning DevOps will take some time, as companies “have not yet committed to shifting tools and processes for fear of disrupting tried-and-true traditional software delivery methods—such as waterfall—or entrenched teams.”

The Power of Collaboration

Machine learning is one side of the coin, and human collaboration is the other. Andy Youniss, CEO of Rocket Software, for one, sees an ongoing “fusion of collaborative and cognitive computing. Think Google Docs and IBM Watson.” This trend is having a “profound effect on how we work,” he said. “As recently as 2 years ago, we were all trapped in version-control hell, as files were emailed around from person to person and team to team. Today, distributed global teams can collaborate as seamlessly as if they were in the same room. Mix in machine learning and predictive analytics, and projects that used to take years to complete can now be done in days or weeks.” The hurdle, he continued, is being able to manage the massive amounts of data involved. “Compute power must be optimized to handle large, unpredictable loads, and managing bandwidth is absolutely essential.”

Governance Gains Ground

As data lakes have become more popular in recent years, a persistent challenge has been the uncontrolled flow of data into these often untamed repositories. “Lots of data was getting loaded into Hadoop to fill up the data lake, and making it useful was proving incredibly difficult,” said Paul Barth, cofounder and CEO of Podium Data. However, enterprises began to get their arms around this problem over the past year. “Several tools came into their own in 2016 and turned data swamps into clear, potable data lakes that provide useful, actionable data to business-line users throughout an enterprise,” he observed.

