Accessing the Pipeline for Cross-Data Analytics

The continual evolution of technology has enabled more data sources than ever before. The growth of SaaS tools provides many benefits, but there is a downside as well: bringing these cloud data sources into a coherent system for reporting is a perpetual challenge for IT and business intelligence teams. A recent DBTA roundtable webcast covered the issues of combining different SaaS applications into a cloud-based enterprise data warehouse and leveraging the Simple Data Pipe. Presenters included Sarah Maston, solution architect with IBM Cloud Data Services, and Erin Franz, alliances data analyst with Looker.

Creating a data warehouse can be a daunting process. An operational data store (ODS) is a basic warehousing option that allows users to centralize their different types of data in one consolidated location. A problem arose for IBM Cloud Data Services when it wanted to build a data warehouse. “Writing to a REST API and receiving JSON data is where this gets a little difficult,” stated Maston. The answer was the Simple Data Pipe, which “is a tool that quickly automates the movement of the data for your architectural goals.”
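The difficulty Maston describes is that SaaS REST APIs return nested JSON, while a warehouse table expects flat rows and columns. As a minimal sketch of that flattening step (the record structure and column-naming convention here are illustrative assumptions, not part of the Simple Data Pipe itself):

```python
import json

def flatten_record(record, parent_key="", sep="_"):
    """Flatten one nested JSON record into a flat dict of warehouse columns."""
    items = {}
    for key, value in record.items():
        col = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            # Nested objects become prefixed columns, e.g. contact.name -> contact_name
            items.update(flatten_record(value, col, sep))
        else:
            items[col] = value
    return items

# Example: a JSON payload as a SaaS REST API might return it (hypothetical shape)
payload = json.loads('{"id": 1, "contact": {"name": "Ada", "city": "NYC"}}')
row = flatten_record(payload)
print(row)  # {'id': 1, 'contact_name': 'Ada', 'contact_city': 'NYC'}
```

A pipeline tool automates this kind of step across many sources and schedules the loads, which is what makes it preferable to hand-writing one loader per SaaS API.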

Looker does not move a user’s data; the data stays in the analytics database. Looker uses a modeling language called LookML, which transforms the data at query time. “This avoids some complex ETL processes up front,” noted Franz. It is able to connect to databases that support SQL, transform the data using LookML, and allow users to explore their datasets. Among the advantages that Looker, combined with dashDB, provides are dashboards and reports for sales teams, conversion rates at each step of the sales funnel, and the ability to create and curate custom datasets so that everyone in the business can explore the data that is pertinent to them.
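The idea behind query-time transformation is that business logic lives as expressions in a model and is compiled into SQL when a question is asked, rather than being baked into tables by an upstream ETL job. A minimal sketch of that pattern (the model, table, and column names are hypothetical, and this is not Looker's actual implementation):

```python
# Hypothetical model: dimensions defined as SQL expressions,
# applied at query time instead of in an upstream ETL job.
MODEL = {
    "order_date": "DATE(created_at)",
    "revenue_usd": "amount_cents / 100.0",
}

def build_query(table, dimensions):
    """Generate a SELECT whose transformations run inside the database at query time."""
    cols = ", ".join(f"{MODEL[d]} AS {d}" for d in dimensions)
    return f"SELECT {cols} FROM {table}"

print(build_query("orders", ["order_date", "revenue_usd"]))
# SELECT DATE(created_at) AS order_date, amount_cents / 100.0 AS revenue_usd FROM orders
```

Because the transformation is just generated SQL, changing a definition in the model takes effect on the next query, with no pipeline rebuild.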

To view a replay of this webinar, go here.
