10 Big Data and IoT Predictions for 2017

What's ahead for 2017 in terms of big data and IoT? IT executives reflect on the impact that Spark, blockchain, data lakes, cognitive computing, AI and machine learning, and other cutting-edge approaches may have on data management and analytics over the year ahead.

  1. Blockchain transforms select financial service applications. In 1998, Nick Szabo wrote a short paper entitled “The God Protocol,” musing about the creation of a be-all end-all technology protocol, one that designated God the trusted third party in the middle of all transactions. Blockchain is such a trust protocol: a global distributed ledger that changes the way data is stored and transactions are processed. The blockchain runs on computers distributed worldwide, and the chains can be viewed by anyone. Transactions are stored in blocks; each block refers to the preceding block and is time-stamped, storing the data in a form that cannot be altered. Hackers find it nearly impossible to tamper with the blockchain, since anyone in the world can view the entire chain and detect any alteration. Blockchain provides obvious efficiency for consumers. For example, customers won't have to wait for that SWIFT transaction or worry about the impact of a central data center leak. For enterprises, blockchain presents cost savings and an opportunity for competitive advantage. In 2017, select, transformational use cases will emerge in financial services, with broad implications for the way data is stored and transactions are processed. - John Schroeder, executive chairman and founder, MapR
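The tamper-evidence described above (each time-stamped block referencing the hash of its predecessor) can be sketched in a few lines of Python. This is a toy hash chain for illustration only, not any production blockchain; the field names and transactions are invented:

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Create a block whose hash covers its timestamp, data, and the previous block's hash."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def chain_is_valid(chain):
    """Recompute every hash and check each block points at its predecessor."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("timestamp", "data", "prev_hash")}
        payload = json.dumps(body, sort_keys=True).encode()
        if block["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Build a tiny three-block ledger (transactions are illustrative).
genesis = make_block({"tx": "genesis"}, prev_hash="0" * 64)
b1 = make_block({"tx": "alice->bob 10"}, prev_hash=genesis["hash"])
b2 = make_block({"tx": "bob->carol 4"}, prev_hash=b1["hash"])
chain = [genesis, b1, b2]
assert chain_is_valid(chain)

# Altering any stored transaction invalidates the chain, because the
# recomputed hash no longer matches the one every later block depends on.
b1["data"]["tx"] = "alice->bob 1000"
assert not chain_is_valid(chain)
```

Because each hash covers the previous hash, rewriting history would require recomputing every subsequent block, which is exactly what makes the ledger auditable by anyone holding a copy.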
  2. C-level executives will assume a larger role in the data center’s success in 2017. This will occur as both data centers and IT strategies continue to have a larger impact on an enterprise’s bottom line—a trend we saw in 2016, evidenced by companies losing millions following data center outages. To take on a larger role in data center success, the C-suite will have to learn to speak the same language as IT and understand the issues IT routinely faces, the budget constraints it works under, and the necessity of advanced tools like automation within the IT environment—all with the goal of helping to address IT’s needs. Equipped, for the first time, with a deeper knowledge base and operational understanding of the data center and how IT teams operate, the C-suite will be in a position to help improve a wide array of issues, including disaster recovery strategies, business continuity, ways to automate manual or tedious processes, and green data center initiatives. - Jeff Klaus, GM, Data Center Solutions, Intel
  3. Big data and IoT systems will evolve in 2017 to help businesses prosper during uncertain times in five ways. Self-service data prep will unlock big data’s full value; organizations will replace self-service reporting with embedded analytics; IoT’s adoption and convergence with big data will make automated data onboarding a requirement; 2017’s early adopters of AI and machine learning in analytics will gain a huge first-mover advantage in the digitalization of business; and cybersecurity will be the most prominent big data use case. - Quentin Gallivan, CEO of Pentaho
  4. Spark still isn’t played out as a technology. Spark will evolve into something that is quite different from what it is today, and the Spark community will have to address integration with persistent storage and data sharing. - Michael Stonebraker, co-founder and CTO of Tamr, and recipient of the 2014 A.M. Turing Award
  5. The Internet of Things architect role will eclipse the data scientist as the most valuable unicorn for HR departments. The surge in IoT will produce a surge in edge computing and IoT operational design, and thousands of resumes will be updated overnight. Additionally, fewer than 10% of companies realize they need an IoT analytics architect, a distinct species from the IoT system architect. Software architects who can design both distributed and central analytics for IoT will soar in value. - Dan Graham, Internet of Things technical marketing specialist, Teradata
  6. The cognitive era of computing will make its debut, converging artificial intelligence, business intelligence, machine learning, and real-time analytics in ways that make real-time intelligence a reality. Such “speed of thought” analyses would not be possible were it not for the unprecedented performance afforded by hardware acceleration of in-memory data stores. By delivering extraordinary performance without the need to define a schema or index in advance, GPU acceleration provides the ability to perform the exploratory analytics that cognitive computing will require. - Eric Mizell, VP of global solutions engineering, Kinetica
  7. Metadata makes a management move. Given that the amount of data in the world doubles every two years, data needs to be managed much more effectively on the enterprise end of the data consumption chain. Metadata is the data about our data, such as when a file was created, when it was last opened, what application uses it, and so forth. As IT finally becomes able to see exactly which data is cold and which is hot, it will get much easier to align data to the different storage resources available to enterprises today. The result will be much less overspending and much more optimization for companies that begin to manage by the intelligence available in their metadata. - Lance Smith, CEO of Primary Data
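The hot/cold distinction above can be derived from nothing more than standard filesystem metadata. A minimal sketch, assuming a simple last-access-time policy (the 90-day threshold and field names are illustrative, not any vendor's product):

```python
import os
import tempfile
import time

COLD_AFTER_DAYS = 90  # illustrative threshold: files idle longer than this are "cold"

def classify(path, now=None):
    """Label a file hot or cold using only its stat() metadata."""
    st = os.stat(path)
    now = now if now is not None else time.time()
    idle_days = (now - st.st_atime) / 86400  # seconds since last access -> days
    return {
        "path": path,
        "size_bytes": st.st_size,
        "idle_days": round(idle_days, 1),
        "tier": "cold" if idle_days > COLD_AFTER_DAYS else "hot",
    }

# Demo on a freshly written temp file: it was just accessed, so it is "hot".
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"example payload")
    name = f.name
info = classify(name)
os.unlink(name)
```

Sweeping a directory tree with this kind of classifier is what lets cold data be tiered off to cheaper storage while hot data stays on fast media.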
  8. In 2017, organizations will stop letting data lakes be their proverbial ball and chain. Centralized data stores still have a place in initiatives of the future: How else can you compare current data with historical data to identify trends and patterns? Yet, relying solely on a centralized data strategy will ensure data weighs you down. Rather than a data lake-focused approach, organizations will begin to shift the bulk of their investments to implementing solutions that enable data to be utilized where it’s generated and where business processes occur—at the edge. In years to come, this shift will be understood as especially prescient, now that edge analytics and distributed strategies are becoming increasingly important parts of deriving value from data. - Adam Wray, CEO of Basho
  9. 2017 will be the year organizations begin to rekindle trust in their data lakes. The “dump it in the data lake” mentality compromises analysis and sows distrust in the data. With so many new and evolving data sources like sensors and connected devices, organizations must be vigilant about the integrity of their data and expect and plan for regular, unanticipated changes to the format of their incoming data. Next year, organizations will begin to change their mindset and look for ways to constantly monitor and sanitize data as it arrives, before it reaches its destination. - Girish Pancha, CEO and founder, StreamSets
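"Monitor and sanitize on arrival" boils down to checking every record against an expected schema before it lands, and quarantining anything that drifts instead of dumping it in the lake. A minimal sketch, with invented sensor field names and a deliberately permissive policy toward additive drift:

```python
# Expected schema for incoming sensor records (field names are illustrative).
EXPECTED = {"device_id": str, "temp_c": (int, float), "ts": int}

def sanitize(record):
    """Return (clean_record, None), or (None, reason) for a drifting record."""
    missing = set(EXPECTED) - set(record)
    if missing:
        return None, f"missing fields: {sorted(missing)}"
    for field, types in EXPECTED.items():
        if not isinstance(record[field], types):
            return None, f"bad type for {field!r}"
    # Keep only expected fields: new, unexpected fields (additive drift)
    # are dropped rather than rejecting the whole record.
    return {k: record[k] for k in EXPECTED}, None

loaded, quarantined = [], []
for rec in [
    {"device_id": "d1", "temp_c": 21.5, "ts": 1483228800},
    {"device_id": "d2", "temp_c": "hot", "ts": 1483228801},       # type drift
    {"device_id": "d3", "temp_c": 19, "ts": 1483228802, "fw": 2}, # additive drift
]:
    clean, reason = sanitize(rec)
    if clean is not None:
        loaded.append(clean)
    else:
        quarantined.append((rec, reason))
```

The key design choice is that nothing unvalidated ever reaches the destination: drifting records go to a quarantine for inspection, so schema changes surface immediately instead of silently corrupting downstream analysis.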
  10. Digital transformation drives the next wave of cloud. Enterprise-level internal resources, including business-critical applications, are now being moved to the cloud. This is a new development, as internal-facing applications have traditionally been kept internal. The challenge with migrating old systems and applications to a newer encrypted approach is that network capabilities can be stretched thin or become too fragile, which creates complexities in application planning, performance monitoring, and final migration to the cloud. Digital transformation isn’t a fad, and we expect the migration of critical applications to the cloud to increase in 2017 across all markets. Large enterprise clouds are now being adopted beyond just customer-facing resources like e-commerce websites. Visibility into application and network performance is important for understanding how these migrations affect the business as organizations change the digital landscape. Cloud-only and internet-only transport are the future, as they allow enterprise organizations to become more nimble and agile while also providing cost savings. - Sean Applegate, senior director, technology strategist at Riverbed