Success with Big Data: Key Observations from Data Summit 2016 Presentations

By now it is well known that the value of big data comes from its variety, volume, and velocity. But with the increase in data sources, data types, and data management platforms, new obstacles can also appear, creating challenges in combining data for valuable insights.

During educational presentations on industry trends and technologies, keynotes, discussions, and hands-on workshops at Data Summit 2016, presenters addressed the combination of philosophies and technical approaches that can help organizations succeed.

Here are 10 observations made by Data Summit 2016 presenters about what to keep in mind when embarking on initiatives to put data to work.

  1. Co-located data is not the same as integrated data: It’s a red flag when you go into a company and the number-one thing they say is, “Let’s get our data in one place, then we can do something.” The idea that co-located data automatically becomes integrated is false.
    Organizations also cannot expect the best results from data management if the two sides of the business have two different sets of goals in mind. While business leaders want to jump into projects, sometimes throwing caution to the wind, IT personnel focus on doing more with less and cutting costs. – Anne Buff, business solutions manager and thought leader for SAS Best Practices
  2. Keep it simple: There are lots of new terms and technologies, but it doesn't need to be complicated. “Polyglot persistence,” for instance, has become a neologism in big data but it simply means selecting the best technology for the problem. And, remember that NoSQL systems don't replace relational systems – they augment them. – Craig S. Mullins, principal of Mullins Consulting
  3. Relieving the DevOps data constraint: Creating, distributing, and managing test data has become a bottleneck in the increasingly fast world of agile and DevOps. Twenty percent of the software development lifecycle (SDLC) is lost waiting for data, and 60% of dev/test time is consumed by data tasks. One way to streamline test data management is with virtual data. – Kyle Hailey, technical evangelist at Delphix
  4. Understand how cloud can help: Cloud has changed the equation and introduced the ability to handle large datasets. Cloud allows us to take petabytes of data and analyze it; 15 years ago, this kind of analysis would not have been possible without highly skilled data experts. Cloud as a whole is breaking down barriers. – Kalev Hannes Leetaru, Forbes columnist and founder of the GDELT Project
  5. Read the fine print: Buying cloud services by the drink is fine until everybody decides to have a drink - not everything should be bought by the drink. Examine costs carefully and read the fine print on cloud contracts. – Michael J Corey, president, Ntirety – A HOSTING Company; and Don Sullivan, system engineer database specialist, VMware
  6. Data as a valuable asset: Increasingly, data is being recognized and appreciated as an asset, and even a kind of capital, and it needs to be treated as such. Think of Airbnb’s rental business, Uber’s surge pricing, and Alibaba’s online marketplace. – Nick Chandra, vice president of Cloud Customer Success at Oracle
  7. Learn about blockchain: If you don't know anything about blockchain, do yourself a favor and do some research now. – Charles Pack, technical director, CSX Technology (with credit to Chris, Anant, Kuassi, Madhu; Coleman & the IOUG Team)
  8. Data virtualization tackles the problem of proliferation: Data virtualization can reduce unnecessary copies, the root of data proliferation. - BJ Fesq, chief architect and chief data officer of CIT Group
  9. Deal with data governance during the data discovery process: To begin the governing process, enterprises must decide on a definition of ownership and set up ongoing monitoring of the discovery methods. Enterprises can tap into a variety of strategies, from enabling highly iterative, self-sufficient IT to utilizing intuitive visualization tools. – John O’Brien, principal analyst and CEO at Radiant Advisors
  10. The value of in-chip analytics: In-chip analytics is simpler for business users and IT, enabling ad hoc data mashups faster and at greater scale, and delivering more value from data. – Jeremy Sokolic, vice president, product, at Sisense
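To make observation 2 concrete, "polyglot persistence" just means routing each kind of data to the store best suited to it. A minimal sketch of the idea, using SQLite as the relational store and a plain dict standing in for a key-value store such as Redis (all names and data here are illustrative, not from the presentation):

```python
import sqlite3

# Relational store: order records benefit from schemas, SQL queries,
# and transactions, so they go to a relational database.
orders_db = sqlite3.connect(":memory:")
orders_db.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)"
)
orders_db.execute(
    "INSERT INTO orders (customer, total) VALUES (?, ?)", ("acme", 99.50)
)
orders_db.commit()

# Key-value store: ephemeral session data needs fast lookups by key and
# no joins, so a key-value store fits better (a dict stands in here).
session_store = {}
session_store["session:42"] = {"user": "acme", "cart_items": 3}

# Each store is queried in its own idiom.
total = orders_db.execute(
    "SELECT total FROM orders WHERE customer = ?", ("acme",)
).fetchone()[0]
cart_items = session_store["session:42"]["cart_items"]
```

The point, per Mullins, is that the NoSQL side augments rather than replaces the relational side: both stores coexist, each handling the workload it is best at.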

Many presentations from Data Summit 2016 have been made available for download.


