Data Integration: From Dark Art to Enterprise Architecture


In addition, projects in these environments are highly siloed, with little to share or pass on to other projects taking place within the enterprise. Such a scenario can be disastrous—and extremely costly—in the event of a merger or acquisition, in which two sets of corporate data need to be brought together. The net result is multiple, unconnected data integration platforms within the same organization—each with its own cache of hand-coded scripts, or even hand-coded connectors between individual applications.

The ultimate goal is to have data integration take place automatically behind the scenes every time a new data source is identified or a new user interface is requested. At the front end, a new application or user interface can be supported automatically, without intervention from the IT department.

Some even refer to this as “one-click” data integration. In an ideal state, data integration is continuous and available across the enterprise, via standardized and reusable data services and metadata. To get there, data integration needs to be part of a systematized, architecturally designed process—emphasizing repeatability, continuous improvement, and quality. For any organization facing the big data challenge—with terabytes or petabytes of structured, unstructured, or semi-structured data moving through the business—data integration needs to be addressed as a repeatable and automated process.


For more articles on this topic, access the special section "DBTA Best Practices: Data Integration, Master Data Management, and Data Virtualization."


The following are key steps to achieving more rapid data integration as part of a repeatable enterprise architecture approach:

Data Virtualization: Moving data to an abstracted services layer ensures that it is accessible to everyone across the enterprise. Just as IT assets are now offered through service layers, via software as a service or platform as a service, information can be made available through a “data as a service” approach.

Because high-speed algorithms and APIs can be employed, data virtualization enables enterprises to serve up actionable information to decision makers and consuming applications more rapidly and seamlessly than traditional data integration solutions, which have depended on ETL approaches or database consolidation. It can also abstract and support a range of newer and traditional data integration deployments, from data warehouses, data marts, and appliances to cloud and data federation environments. As an added bonus, there is no pressing requirement for physical data consolidation. With data virtualized through a services layer, there is less need for data transformation, replication, or movement.

On the end user side of the equation, it won’t matter what type of client device is used to access the data—whether it’s a smartphone, tablet, PC, or other device. In addition, front-end applications, such as BI tools, can also access virtualized data through standard industry interfaces. Data virtualization provides a single, consistent architecture and reduces the number of interfaces the enterprise is required to support.
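To make the idea concrete, the following is a minimal sketch of a virtualized “data as a service” view, assuming two hypothetical sources—an operational SQLite database and a departmental CSV extract. The source names, columns, and file paths are illustrative assumptions, not any particular vendor’s API.

# Minimal "data as a service" sketch: a virtual customer view federated at
# query time across two hypothetical sources (a SQLite table and a CSV file),
# with no physical consolidation into a warehouse.
import csv
import sqlite3
from typing import Dict, Iterator

def customers_from_db(db_path: str) -> Iterator[Dict[str, str]]:
    """Yield customer rows from an operational SQLite database."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
    for row in conn.execute("SELECT id, name, email FROM customers"):
        yield {"id": str(row["id"]), "name": row["name"], "email": row["email"]}
    conn.close()

def customers_from_csv(csv_path: str) -> Iterator[Dict[str, str]]:
    """Yield customer rows from a departmental CSV extract."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            yield {"id": row["id"], "name": row["name"], "email": row["email"]}

def virtual_customer_view(db_path: str, csv_path: str) -> Iterator[Dict[str, str]]:
    """Single abstracted view: callers never know, or care, where rows live."""
    yield from customers_from_db(db_path)
    yield from customers_from_csv(csv_path)

# Any consumer (a BI tool, a mobile client, another service) reads the same
# logical view through one interface:
# for customer in virtual_customer_view("crm.db", "marketing_extract.csv"):
#     print(customer)

Whether the consumer is a dashboard or a mobile app, it calls the same abstracted view; where the rows physically live can change without touching the consuming application.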

Master Data Management (MDM): Often, within enterprises, there is friction between departments, or delayed reporting, because one group has different data sets or analysis results than another. When this occurs, discrepancies must be hashed out manually. With MDM, the enterprise secures a core set of data, often referred to as a single master file or “gold copy” of reference data. An effective MDM program helps address the issues that arise when multiple systems handle overlapping data. MDM helps centralize the data into a single view of customers, versus the fragmented views found across the enterprise.
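As a rough illustration of the “gold copy” idea, here is a sketch that collapses overlapping customer records from several systems into one master record per customer, using a simple survivorship rule in which the most recently updated, non-empty value wins. The field names, the matching key (email), and the sample records are illustrative assumptions, not a prescribed MDM design.

# Sketch of a "gold copy" merge: overlapping customer records are collapsed
# into one master record per customer.
from datetime import date
from typing import Dict, List

def build_gold_copy(records: List[Dict]) -> Dict[str, Dict]:
    """Return one master record per customer, keyed by a normalized match key."""
    master: Dict[str, Dict] = {}
    for rec in sorted(records, key=lambda r: r["updated"]):  # oldest first
        key = rec["email"].strip().lower()        # normalize the matching key
        golden = master.setdefault(key, {})
        for field, value in rec.items():
            if value not in (None, ""):           # newer non-empty value survives
                golden[field] = value
    return master

# Example: the same customer held, slightly differently, by CRM and billing.
crm     = {"email": "pat@example.com", "name": "Pat Smith", "phone": "",
           "updated": date(2023, 1, 5), "source": "CRM"}
billing = {"email": "Pat@Example.com", "name": "Patricia Smith", "phone": "555-0101",
           "updated": date(2023, 6, 1), "source": "Billing"}

print(build_gold_copy([crm, billing]))
# One consolidated master record instead of two conflicting departmental ones.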

Data Integration Automation: To reduce the latency and bottlenecks that result from manual coding and scripting, automate as much of the data integration process as possible. Repetitive data integration tasks, such as monitoring and testing, are ideal areas in which to start an automation initiative. Other areas may increasingly be automated as enterprises move to data integration toolsets with prepackaged features and functionality. This dramatically accelerates the process of moving information from sources out to decision makers and applications, and delivers enhanced productivity to both IT staff and business end users.
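One small, hedged example of what automating a repetitive monitoring task might look like: a reconciliation check that runs after every load and compares source and target row counts, failing fast instead of waiting for a manual spot check. The table names, database files, and alerting behavior are illustrative assumptions.

# Sketch of one automated integration check: after each load, compare source
# and target row counts and raise an error on any mismatch.
import sqlite3

def row_count(db_path: str, table: str) -> int:
    """Return the row count for a trusted, known table name."""
    conn = sqlite3.connect(db_path)
    (count,) = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
    conn.close()
    return count

def reconcile(source_db: str, target_db: str, table: str) -> None:
    """Flag loads that drop or duplicate rows."""
    src, tgt = row_count(source_db, table), row_count(target_db, table)
    if src != tgt:
        # In practice this would page an operator or open a ticket.
        raise RuntimeError(f"{table}: source has {src} rows, target has {tgt}")
    print(f"{table}: reconciled OK ({src} rows)")

# Scheduled after every load, e.g., from cron or an orchestration tool:
# reconcile("orders_source.db", "warehouse.db", "orders")

The same pattern extends to other repetitive checks, such as null-rate thresholds or schema drift, each run automatically rather than by hand.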

 
