Traditional architectures and technologies, as well as newer big data approaches, each offer advantages. In a session at Data Summit 2019, titled “Designing a Data Architecture for Modern Business Intelligence & Analytics,” Richard Sherman, managing partner, Athena IT Solutions, looked at the current state of analytics and what needs to change.
The current state of BI and analytics is that work is being done in silos, with lots of spreadsheets still in use and lots of data shadow systems, said Sherman. These have expanded to include discovery tools, data preparation tools, cloud applications, and big data applications, in addition to spreadsheets, creating a complicated, accidental architecture for data and BI.
One-Size-Fits-All Trap: A common mistake many organizations make is falling into the trap of believing that one use case fits all, one size (tool) fits all, and there can be one neck (vendor) to choke.
Technology Trap: Another mistake is the technology trap: believing the hype that each new generation of BI tools is easier and faster, requires less technical knowledge, and addresses the previous generation’s BI challenges. Each generation has the same marketing lifecycle, offering products that promise to overcome the problems of the last, said Sherman.
Cultural Trap: People continue to work in business silos and prefer to stay in their own comfort zone, said Sherman. People need to get things done and the great majority are still using spreadsheets.
Analytical Data Architecture (ADA)
According to Sherman, an enterprise’s analytical data architecture (ADA) needs to implement the integration and analytical requirements of an information architecture, and includes:
- Data Schemas & Models
- Data Integration & Workflow
- Policies, Processes & Standards
- Organization, People, Skills & Politics
- Technology Architecture
- Product Architecture
Kevin Petrie, who works in marketing at Attunity, a division of Qlik, added to the discussion on data integration, explaining that the right architecture is needed to efficiently capture large volumes of changed data from heterogeneous source systems and deliver it in real time to streaming and cloud platforms, data warehouses, and data lakes. Important features for data pipeline automation include automated creation of tables, organization of data structures, and tracking of lineage to support a managed data lake; keeping data in sync; and continuous real-time data replication into the managed data lake.
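The change data capture pattern Petrie describes, detecting only the rows that changed at the source and continuously applying them to downstream targets, can be sketched in a few lines. This is an illustrative, timestamp-watermark version only; the function names and data shapes here are assumptions for the sketch, not Attunity's actual API.

```python
# Minimal sketch of timestamp-based change data capture (CDC).
# Hypothetical names: capture_changes and apply_changes are illustrative,
# not part of any Attunity/Qlik product interface.

def capture_changes(source_rows, last_sync):
    """Return only the rows modified since the last sync watermark."""
    return [r for r in source_rows if r["updated_at"] > last_sync]

def apply_changes(target, changes):
    """Upsert changed rows into the target (e.g., a managed data lake table)."""
    for row in changes:
        target[row["id"]] = row  # insert or update, keyed by primary key
    return target

# Simulated source table and an out-of-date target replica.
source = [
    {"id": 1, "name": "alice", "updated_at": 100},
    {"id": 2, "name": "bob",   "updated_at": 205},
    {"id": 3, "name": "carol", "updated_at": 310},
]
target = {1: {"id": 1, "name": "alice", "updated_at": 100}}

# Only rows changed after the watermark (150) are captured and applied,
# avoiding a full reload of the source.
changes = capture_changes(source, last_sync=150)
target = apply_changes(target, changes)
```

In production, the watermark would come from a database transaction log rather than a timestamp column, which is what lets CDC run continuously without burdening the source system.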
Many presenters are making their slide decks available on the Data Summit 2019 website at www.dbta.com/DataSummit/2019/Presentations.aspx.