New Approaches to Application Quality
All of us have experienced software application faults, or “bugs.” For the most part, these are trivial annoyances: glitches in smartphone apps or time-wasting errors in desktop programs such as Microsoft Office.
But because software applications are integrated into virtually all aspects of modern life, application quality flaws can have devastating consequences. Software glitches have crashed rockets (Ariane 5), overdosed patients with radiation (Therac-25), taken down critical online services including Google, and caused financial miscalculations worth millions of dollars.
The process of removing quality errors from computer programs emerged in the very earliest days of digital computing. When engineers working on the Harvard Mark II computer in 1947 noticed errors in calculations, they eventually found that a moth trapped in one of the machine’s relays was preventing its proper operation. Thus began the now traditional “debugging” stage of application development.
The earliest attempts to attack the problem of application bugs borrowed from the quality assurance process for manufacturing: at the end of a production line, each product is tested for faults and rejected if any are found. In the worst case, design errors can cause an entire production line to fail.
This “test it when it’s built” approach seemed sensible and logical in the early days of software development, but it rarely produced a satisfactory outcome. At first, the problem seemed to be one of coverage: as software applications grew more complex, it became harder and harder to manually test every combination of inputs and validate the outputs. Achieving “satisfactory test coverage” was thus a typical ambition of the software testers of the ’70s and ’80s.
The superficial problems of test coverage seemed to be overcome with the advent of automated software testing. Tests were constructed using software tools that could rapidly simulate many different inputs and user interactions, and individual program elements were associated with their own test cases that could be run at any time to ensure that no regressions – bugs in previously working components – occurred.
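As a minimal sketch of this idea, the test case below pins down the behavior of a single program element so that any later change which breaks it is caught automatically. The function `apply_discount` and its rules are invented for illustration; Python's standard `unittest` framework supplies the test machinery.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical business logic under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountRegressionTests(unittest.TestCase):
    """Each test fixes one expected behavior; a future change that
    violates it fails the suite, flagging a regression."""

    def test_basic_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)
```

Running `python -m unittest` discovers and executes such checks, so the whole suite can be re-run cheaply on every build.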
Indeed, software test automation brought significant improvements in baseline application quality, but it still failed to prevent many significant application failures. It was simply not practical to anticipate and automate every possible scenario – end users tended to take surprising routes through the software, exposing new problems. Furthermore, testing every individual component often failed to address the most significant product quality questions – such as whether the product provides actual value to the user.
Finally, as best practice in software development moved from the distinct “waterfall” phases to a more iterative and agile methodology, the idea of testing at the end became obsolete.
As a result of these forces, two significant new approaches to application quality emerged: exploratory testing and risk-based testing.
Risk-based testing – pioneered in particular by Rex Black – encourages quality control to identify and focus on the riskiest parts of the application. Rather than trying to cover all scenarios, risk-based testing encourages quality assurance to determine the worst case scenarios for the product, and focus attention on those.
Exploratory testing – as evangelized by James Bach and others – suggests that these risks will be revealed not by endlessly repeating pre-defined tests, but by testers moving off the beaten path and exercising product functionality in new ways.
Neither of these approaches claims to eradicate application quality issues, which will most likely persist as long as software is written by human beings. Rather, they aim to find and address the most severe and unexpected software faults. Together with automation of the more routine tests, these techniques form the basis for higher-quality application software.