Strategies for Overcoming Big Data Integration Challenges at Data Summit 2018



With the rise of big data, organizations need to leverage a wider variety of data sources as quickly as possible for real-time decision making in mission-critical environments.

Presentations at Data Summit 2018 showcased real-world scenarios where data integration is providing value.

At Data Summit, Joseph deBuzna, VP, Field Engineering, HVR, showed how HVR helped a global financial services company that needed to architect a cloud-based trading data analytics platform.

This technical presentation, titled “Data Acquisition to Support Trading Data Analytics,” showed how, using HVR, the customer architected continuous data feeds using data integration technology so that it could enable real-time data analytics for best execution.

The customer’s cloud-based trading data analytics platform leverages HVR as a key real-time data ingestion tool.

The challenge was that the company needed real-time data analytics to support data-driven decision making and insight, and sought high data availability, decreased data prep time, high data quality and consistency, and advanced analytics capabilities. The resulting data pipeline combined ingestion of structured data from OLTP systems, data storage and access, data federation, data security, and enterprise data governance, with all data in one place exposed to a variety of users through services.
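The "all data in one place, exposed through services" pattern described above can be sketched in miniature. The names below are illustrative only and do not reflect HVR's actual API: rows captured from an OLTP source land in a shared store and are served to consumers through a query layer.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class TradeEvent:
    """One structured row captured from the OLTP trading system."""
    trade_id: int
    symbol: str
    price: float


class Pipeline:
    """Toy ingest-store-serve pipeline: OLTP rows land in a single
    shared store and are exposed to users via a query service."""

    def __init__(self) -> None:
        self.store: List[TradeEvent] = []  # "all data in one place"

    def ingest(self, event: TradeEvent) -> None:
        # In a real platform this would be a continuous CDC feed.
        self.store.append(event)

    def query(self, symbol: str) -> List[TradeEvent]:
        # Service layer through which consumers access the store.
        return [e for e in self.store if e.symbol == symbol]


pipe = Pipeline()
pipe.ingest(TradeEvent(1, "ACME", 101.5))
pipe.ingest(TradeEvent(2, "ACME", 101.7))
pipe.ingest(TradeEvent(3, "INIT", 55.0))
print(len(pipe.query("ACME")))  # 2
```

In practice the store would be cloud object storage or a warehouse and the query layer a federated service, but the separation of ingestion, storage, and access is the same.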

Kevin Scott, principal sales engineer, CloverETL, continued the data integration conversation in a session, “Automating Data Architecture Design,” in which he demonstrated how a data integration platform can be as valuable in developing an architecture as it is in operating one. Data integration platforms are traditionally called upon to operate a data architecture, connecting, transforming, and publishing data. But the right data integration platforms are also useful while developing a data architecture, for identifying and characterizing the data, modeling the data structures in an architecture, and testing the architecture with real data and real use cases.

Scott showcased real use cases in securities trading/risk assessment and small banking systems.

In a trade risk group at a large financial institution, the customer used CloverETL in production to automate a regular fetch, transform, analyze, and deliver pipeline, and also in development to transform data model documents into executable code.
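The fetch-transform-analyze-deliver pipeline named above can be sketched as four composed stages. This is a generic illustration, not CloverETL's design; the field names and the sanity-check rule are invented for the example.

```python
def fetch():
    # Stand-in for pulling raw trade records from source systems.
    return [{"notional": 100.0}, {"notional": -25.0}, {"notional": 40.0}]


def transform(rows):
    # Normalize: drop records that fail a basic sanity check.
    return [r for r in rows if r["notional"] > 0]


def analyze(rows):
    # Simple risk-style aggregate over the cleaned rows.
    return {"total_exposure": sum(r["notional"] for r in rows)}


def deliver(report):
    # Stand-in for publishing the result to downstream consumers.
    return report


report = deliver(analyze(transform(fetch())))
print(report["total_exposure"])  # 140.0
```

Automating the pipeline then amounts to scheduling this composition, so each run fetches fresh data and republishes the result without manual steps.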

A small bank also uses CloverETL in production to automate regular exports of core bank data for internal and external reporting. In development, it creates anonymized test datasets for validating changes to the bank’s software data architecture and implementation.
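One common way to build such anonymized test datasets (a sketch under assumptions, not the bank's actual method) is to replace sensitive fields with stable hashes, so the data keeps its shape and joinability without exposing real values:

```python
import hashlib


def anonymize(rows, sensitive=("name", "account")):
    """Replace sensitive fields with truncated SHA-256 digests.
    The same input always maps to the same digest, so joins and
    referential integrity across tables are preserved."""
    out = []
    for row in rows:
        clean = dict(row)  # leave the original records untouched
        for field in sensitive:
            if field in clean:
                digest = hashlib.sha256(str(clean[field]).encode()).hexdigest()
                clean[field] = digest[:12]
        out.append(clean)
    return out


customers = [{"name": "Alice", "account": "CZ-001", "balance": 1200}]
test_set = anonymize(customers)
print(test_set[0]["balance"])          # unchanged: 1200
print(test_set[0]["name"] != "Alice")  # True
```

Non-sensitive fields such as balances pass through unchanged, which is what makes the dataset useful for validating architecture changes against realistic data.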

Data Summit 2019, presented by DBTA and Big Data Quarterly, is tentatively scheduled for May 21-22, 2019, at the Hyatt Regency Boston with pre-conference workshops on May 20.

Many presentations from Data Summit 2018 have been made available for review at www.dbta.com/DataSummit/2018/Presentations.aspx.
