



Uncovering the 'Blind Spot' in Your Data Strategy

Page 2 of 3


This next generation of analytic software is called Information Optimization, and it is greatly expanding the traditional business analytics landscape. It links these diverse data types with traditional decision-making tools like spreadsheets and business intelligence (BI) systems to offer a richer decision-making capability than was previously possible.

Information Optimization systems start by ingesting semi-structured and unstructured content, from varied sources throughout an organization, and mapping these sources to models so that they can be combined, restructured and analyzed.

This capability has proven invaluable to organizations like Audi Japan, which was struggling to integrate several SAP systems with other information contained in reports and spreadsheets in its finance organization. Using the technology, Audi was able to extract information from the structured sources in its databases and blend it with information harvested from the reports it was receiving from the various SAP systems throughout the organization.

This new capability enabled Audi to get, for the first time, a complete view of the financial health of the business without a tremendous amount of error-prone manual intervention. It has allowed the company to manage the business more effectively and react more quickly to changes in the markets. Through the use of information optimization and visualization technologies, Audi estimates that it has reduced end-of-cycle reporting time by almost 70% and reallocated valuable manpower to more important tasks within its financial operations.

While it sounds simple, the technology actually requires significant intelligence about the structural components of the content types being ingested, and the ability to break these down into "atomic-level" items that can be combined and mapped together in different ways.
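The idea of "atomic-level" decomposition can be illustrated with a minimal sketch. The field names, the line format, and the `atomize` helper below are all hypothetical, assuming a simple line-oriented financial report; a real product would handle far richer structures:

```python
import re

# Hypothetical line format for a semi-structured financial report:
# an ISO date, an account code, and a signed amount per line.
LINE_PATTERN = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2})\s+(?P<account>\w+)\s+(?P<amount>-?\d+\.\d{2})"
)

def atomize(report_text):
    """Yield one record per extracted field, so each field becomes an
    independent "atomic" item that can be remapped and recombined with
    fields harvested from other sources."""
    for line_no, line in enumerate(report_text.splitlines()):
        match = LINE_PATTERN.search(line)
        if not match:
            continue  # skip lines that don't fit the modeled structure
        for field, value in match.groupdict().items():
            yield {"line": line_no, "field": field, "value": value}

report = "2013-06-30 SALES 1250.00\n2013-06-30 COGS -830.50"
records = list(atomize(report))
```

Once content is reduced to field-level records like these, mapping a new source into an existing model is a matter of relabeling fields rather than re-parsing whole documents.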

What makes the process simpler for the user makes the underlying technology more complex. Information Optimization systems are seldom standalone; rather, they work in tandem with ECM systems (to access and utilize the information housed there) and visual BI systems (to allow unstructured, semi-structured and structured sources to be linked together for enhanced analytics).

The final requirements for Information Optimization systems are the ability to work in high-volume, high-velocity environments and to deliver analytics in rich visual discovery models.

Information Optimization tools typically employ a three-stage model: Transform - Distribute - Optimize

In the Transform stage, data stored in structured sources like relational databases is combined with unstructured and semi-structured data sources to feed a complete view of the business. In addition to the internal information sources mentioned above, a great deal of information can come from outside the organization in sources like invoices, purchase orders, partners' reports and PDFs.

Accomplishing this requires a visual mapping capability that allows users to identify how to extract the information needed from the source they are working with and transform it into an analytical data set. Of course, transforming these less-than-structured sources of information requires some kind of intelligence "under the hood" that can determine the most effective methods for identifying and extracting this information. These extraction models can be highly elaborate, incorporating multiple passes at a source, blending multiple sources, and applying logical and mathematical operations to derive new data from the ingested content.
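A multi-pass extraction model of this kind can be sketched as follows. The sample data, the `blend` function, and the `tax_rate` parameter are all invented for illustration, assuming a first pass has already pulled rows from two sources (say, PDF invoices and a relational database) and a second pass joins them and derives a new value:

```python
# Pass 1 output (hypothetical): rows extracted from PDF invoices.
invoices = [
    {"order_id": "A1", "amount": 100.0},
    {"order_id": "A2", "amount": 250.0},
]

# Pass 1 output (hypothetical): rows from a relational database.
orders = {
    "A1": {"region": "East"},
    "A2": {"region": "West"},
}

def blend(invoices, orders, tax_rate=0.08):
    """Pass 2: blend the two sources on order_id and derive a new
    'gross' column with a mathematical operation."""
    result = []
    for row in invoices:
        joined = dict(row)
        joined.update(orders.get(row["order_id"], {}))
        joined["gross"] = round(row["amount"] * (1 + tax_rate), 2)
        result.append(joined)
    return result

blended = blend(invoices, orders)
```

The key point is that each pass works on the output of the previous one, so sources that were never designed to interoperate end up in a single analytical data set.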

Once the structured, semi-structured and unstructured information sources are modeled, these processes need to be repeatable and dynamic so that information can be shared across the organization faster and more efficiently. In the Distribute stage, businesses move the information into high-performance production environments that allow users to process large amounts of data, as well as schedule and automate these processes.
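The repeatability described above can be sketched in miniature. The `run_pipeline` stand-in and the `schedule` loop below are hypothetical; in a production Distribute environment the same role would be played by a job scheduler running the modeled transform on a recurring basis:

```python
import time

def run_pipeline(source):
    """Stand-in for a previously modeled transform: takes raw source
    rows and returns an analytical data set."""
    return [row.upper() for row in source]

def schedule(pipeline, source, runs, interval_seconds=0):
    """Re-run the same pipeline on a fixed cadence so refreshed data
    is produced automatically rather than rebuilt by hand."""
    results = []
    for _ in range(runs):
        results.append(pipeline(source))
        time.sleep(interval_seconds)  # a real deployment would use a scheduler
    return results

history = schedule(run_pipeline, ["q1 report", "q2 report"], runs=2)
```

Because the model, not the person, carries the extraction logic, each scheduled run produces the same shape of output and can be shared across the organization without manual rework.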

As organizations move into this phase of the process, a high-performance server environment becomes critical. The server should allow organizations to deliver any data to their users and to store the extraction models along with their sources. This means the information can be easily shared across an entire organization through web browsers.

