

Tuesday, July 30, 2013

How much testing is enough for software?

This is a million-dollar question, especially when you're in release crunch time and you find out that you don't have automated test suites for your System Under Test (SUT), technically speaking. So how would you define the testing boundary of a piece of software? Like most other questions in this world, the answer is the same: it depends. It depends on what you're testing, why you're testing, and, most importantly, how you're testing it: manually or in an automated fashion. For the remainder of this post, let me clarify what I believe is the goal of software testing: to certify the behaviors of a piece of software by labeling whether they work or not. Releasing it to the end user is a whole different ball game.

Let's first take the most important factor, i.e., how you're testing. If you happen to have fully automated test cases and test suites, my immediate response would be: run 'em all. This is the safest and most cost-efficient way to certify the behavior of a piece of software. Microsoft, for example, builds MS Windows and executes the entire test suite every night. If you can afford it, do it; why take the chance? In statistics, we take a sample because getting the entire population of data is unrealistic. Similarly, if you don't have that luxury, then you pick and choose based on the reality and the expectations set for the software. I've explained this in the later part of this post.
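If you do have an automated suite, "run 'em all" can be as simple as a nightly job that loads every test and reports an overall pass/fail. Here's a minimal sketch using Python's built-in unittest module; the test case and what it asserts are hypothetical placeholders, not from any real SUT:

```python
import unittest

class LoginTests(unittest.TestCase):
    # Hypothetical feature check; in a real suite this would
    # exercise your actual System Under Test.
    def test_password_meets_minimum_length(self):
        self.assertTrue(len("s3cret") >= 6)

def run_full_suite():
    """Run every test in the suite and report overall success."""
    suite = unittest.TestLoader().loadTestsFromTestCase(LoginTests)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()
```

In a real project you'd use `unittest.TestLoader().discover("tests")` to pick up every test module, and let the nightly build fail whenever `run_full_suite()` returns False.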

The next factor is when you're testing without automated test coverage (or at least the time frame doesn't allow you to run the full automated test suite) and you have to test the software manually to certify it. If you are under pressure to complete the testing within a specified time frame, then my suggestion is: go for targeted testing. To determine what and how much to test, follow the Goldilocks principle: don't over-test it, nor under-test it; test the amount that is JUST RIGHT. You'll often find that you can cover 80% of the software's features by executing just around 20% of the test cases you have, and that you would spend the remaining 80% of your resources to cover the rest; check the famous 80-20 rule if you don't take my word for it: http://en.wikipedia.org/wiki/Pareto_principle. So identify that 20% of test cases and run them, and if you're asked to release your software before you are able to run the remaining test cases, go ahead and release it. One important thing to remember: make sure you release it with a confidence factor attached. For instance, when you have run 80% of its test cases, label it as "QA certified with 80% confidence". I know you can't release it to the external world with that tag, but you should nonetheless have that number attached to communicate the risk factor to management. And, most importantly, to cover your back.
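The two ideas above, picking the high-yield 20% of cases and attaching a confidence factor to the release, can be sketched in a few lines of Python. The greedy ranking by feature coverage is my own simplification, and all the names here are illustrative:

```python
def pick_targeted_cases(cases, budget_ratio=0.2):
    """Greedy sketch: prefer the test cases that exercise the most features.
    `cases` maps a test-case name to the set of features it covers."""
    ranked = sorted(cases, key=lambda name: len(cases[name]), reverse=True)
    budget = max(1, int(len(ranked) * budget_ratio))
    return ranked[:budget]

def confidence(executed, planned):
    """Confidence factor: percentage of planned test cases actually run."""
    if planned == 0:
        raise ValueError("no test cases planned")
    return round(100.0 * executed / planned, 1)

# e.g. 160 of 200 planned cases executed
label = "QA certified with %s%% confidence" % confidence(160, 200)
```

A real prioritization would also weigh feature criticality and past defect rates, but even this crude ratio gives management a concrete number instead of a vague "we tested it".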

The last but definitely not the least factor is what you're testing. If you're testing NASA's space flight module, you have no choice but to test it in FULL, and in that case you're certainly not told to finish testing by a certain date; rather, the release, i.e., the mission launch date, is determined by when you're able to complete 100% of the testing. The same is true for medical equipment software or life-support systems. But when you're testing a non-mission-critical system, missing a bug won't take the company down, and you're given a deadline to meet, go ahead boldly with confidence-based testing. (I remember once I logged in to Facebook and was taken to someone else's profile, where I could see certain photo albums of mine. That happened around 2010, and now, in 2013, Facebook has turned out to be the de facto platform for social networking; that bug was surely missed by some test engineer, but who cared?) One more thing about test coverage: there are cases where you want to run both positive and negative test cases for a feature, and there are cases where you're just fine running only the positive ones (this is applicable to software that is built for in-house use and will never go outside the corporate intranet boundary).
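To make the positive/negative distinction concrete, here's a small sketch in Python's unittest. The `parse_age` function is a made-up stand-in for whatever feature you're certifying; the point is the pairing of one test for valid input and one for invalid input:

```python
import unittest

def parse_age(text):
    """Hypothetical SUT function: parse a non-negative integer age."""
    value = int(text)  # raises ValueError on non-numeric input
    if value < 0:
        raise ValueError("age cannot be negative")
    return value

class AgeTests(unittest.TestCase):
    def test_valid_age_is_parsed(self):          # positive case
        self.assertEqual(parse_age("42"), 42)

    def test_negative_age_is_rejected(self):     # negative case
        with self.assertRaises(ValueError):
            parse_age("-7")
```

For intranet-only tools you might skip the negative case under time pressure; for anything exposed to outside users, the negative cases are where the embarrassing bugs hide.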

The bottom line is: you always want to create every possible test case for every feature and run them all, but you need to act based on the reality on the ground, and don't hesitate to take calculated risks.