The Database Testing Blues


Within IT, testing has matured as an industry. Many tools exist, and many IT shops have testing groups. But often those testing groups are unable to assist with data-related projects.

The heart of the problem is that the focus of the testing practice has been perverted. The testing industry is concerned primarily with validating code, specifically the kinds of code that interact with people, such as the functions and procedures that generate websites or apps. Ultimately, any time a form is created to be filled in, whatever is displayed, entered, and returned in response can be tested.

The testing tools record keystrokes and expected responses, then compare new results to those expected results. Those automated steps can be executed again and again on demand, even in large concurrent volumes for load and performance testing.

For building data marts, data lakes, data warehouses, and similar data-focused endeavors, the deliverable is the populated data structures inside databases. Users may access this data in many ways, but those interactions are not what needs to be tested. With these data-related projects, the need is to test the accuracy of the data at rest within the structures. Having no user interactions to validate leaves most testing groups confused; many cannot even conceive of a solution without an interactive application. The explanation is often amazingly simple: the testing personnel do not relate to data at rest and cannot cope with it. Sadly, that means that testing options outside of the testing groups must be used.

Because of this cognitive dissonance, the official testers are often removed from the verification of data solutions. Instead, data analysts are pressed into service as the testing corps that approves the solution's implementation. Unlike testers, who seem able to understand only the functionality of their application testing tools, data analysts understand data and have the skills necessary to explore the content within the database.

Often a first activity entails data profiling, in which each data structure is reviewed. Simply gathering table row counts is a start, and object counts can be verified against expected numbers based on the sources. Next, each column (or only the specified important columns, when dealing with too many columns and too few testing resources) can have its value distribution reviewed. Some number of specific rows can be checked thoroughly, comparing resultant values to their source to ensure that all columns match expectations. Foreign keys that link tables together can be checked: do the expected joins return the expected number of rows? At the end of these checks, some level of confidence about the correctness of the overall data population should be gained.
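
As a rough illustration of these profiling checks, the sketch below uses Python's built-in sqlite3 module; the database file, the customer and orders tables, and the status and customer_id columns are hypothetical stand-ins for whatever structures a given project actually delivers.

    import sqlite3

    # Hypothetical warehouse database and table names, used for illustration only.
    conn = sqlite3.connect("warehouse.db")
    cur = conn.cursor()

    # 1. Table row counts, to be verified against numbers expected from the sources.
    for table in ("customer", "orders"):
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        print(table, "rows:", cur.fetchone()[0])

    # 2. Value distribution for an important column, to spot unexpected or missing values.
    cur.execute("SELECT status, COUNT(*) FROM orders GROUP BY status ORDER BY 2 DESC")
    for value, freq in cur.fetchall():
        print("orders.status =", value, "count =", freq)

    # 3. Foreign-key check: orphaned child rows mean the expected join will not
    #    return the expected number of rows.
    cur.execute(
        """
        SELECT COUNT(*)
        FROM orders o
        LEFT JOIN customer c ON c.customer_id = o.customer_id
        WHERE c.customer_id IS NULL
        """
    )
    print("orders with no matching customer:", cur.fetchone()[0])

    conn.close()

Row-level spot checks against the source follow the same pattern: select a handful of keys and compare every column value back to its origin.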

Ideally, a few choice reports can be executed and compared against older solution versions. This may be accomplished directly, if a report translates one-for-one between the old solution and the new one, or indirectly, by comparing grand totals and sub-totals within control-breaks for critical organizational metrics down to some meaningful level. The same data at the same point in time should produce the same results from both. Differences can then be explored. Do not be discouraged by initial discrepancies; it is not unheard of for further discovery to reveal that an older, original report had always contained errors that are now corrected in the new solution.
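
A minimal sketch of such an indirect comparison follows, again in Python with sqlite3; the two database files and the sales_fact table with its region and revenue columns are assumptions chosen only to show the shape of the check.

    import sqlite3

    def subtotals(db_path):
        """Return {region: total revenue} for one solution's point-in-time snapshot."""
        conn = sqlite3.connect(db_path)
        rows = conn.execute(
            "SELECT region, SUM(revenue) FROM sales_fact GROUP BY region"
        ).fetchall()
        conn.close()
        return dict(rows)

    old_totals = subtotals("old_solution.db")  # hypothetical snapshots of the same data
    new_totals = subtotals("new_solution.db")  # taken at the same point in time

    # Flag every control-break (region) where the two solutions disagree.
    for region in sorted(set(old_totals) | set(new_totals)):
        if old_totals.get(region) != new_totals.get(region):
            print(f"{region}: old={old_totals.get(region)} new={new_totals.get(region)}")

    # Grand totals should match as well; a difference is a lead to investigate,
    # not automatically a defect in the new solution.
    print("grand total old:", sum(old_totals.values()))
    print("grand total new:", sum(new_totals.values()))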

Using reports or high-level metrics in this way, while helpful in providing due diligence, only provides partial coverage. Therefore, this report/metric-totals-based approach should not be used exclusively. There are many data points that may not be covered by a sampling of critical reports, and those other data points need to be verified too. Maybe someday testing departments will learn to understand stand-alone data in addition to interactive applications, or will expand to include data analysts, so that both data in motion and data at rest may be holistically tested.
