Tuesday, July 30, 2013

How much testing is enough for a piece of software?

This is a million dollar question, especially when you're in release crunch time and you find out that you don't have automated test suites for your, technically speaking, System Under Test (SUT). So how do you define the testing boundary of a piece of software? Like most other questions in this world, the answer is the same - it depends. It depends on what you're testing, why you're testing, and most importantly how you're testing it - manually or in an automated fashion. For the remainder of this post, let me clarify what I believe is the goal of software testing: to certify the behaviors of a piece of software by labeling each one as working or not working. Releasing it to the end user is a whole different ball game.

Let's first take the most important factor, i.e. how you're testing. If you happen to have fully automated test cases and test suites, my immediate response would be - run 'em all. This is the safest and most cost-efficient way to certify the behavior of a piece of software. Take the Microsoft way: for MS Windows they build the product and execute the entire test suite every night. If you can afford that, do it; why take the chance? In statistics, we take a sample because collecting the entire population of data is unrealistic. Similarly, if you don't have that luxury, you pick and choose based on the reality and the expectations set for the software. I've explained how in the later part of this post.
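To make the "run 'em all" versus "pick and choose" distinction concrete, here is a minimal sketch, assuming a standard unittest layout; the `tests` directory, the `sample_ratio` knob, and the sampling-by-analogy-with-statistics idea are illustrative assumptions, not something the post prescribes:

```python
import random
import unittest

def _flatten(suite):
    """Recursively expand a TestSuite into individual test cases."""
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            yield from _flatten(item)
        else:
            yield item

def run_tests(test_dir="tests", sample_ratio=None, seed=0):
    """Run the full suite, or a random sample of it when time is short.

    sample_ratio=None -> run everything (the 'run 'em all' case).
    sample_ratio=0.2  -> run roughly 20% of the discovered tests,
                         the way a statistician samples a population.
    """
    all_tests = list(_flatten(unittest.TestLoader().discover(test_dir)))
    if sample_ratio is not None:
        random.seed(seed)
        k = max(1, int(len(all_tests) * sample_ratio))
        all_tests = random.sample(all_tests, k)
    result = unittest.TextTestRunner().run(unittest.TestSuite(all_tests))
    return result.wasSuccessful()

if __name__ == "__main__":
    run_tests()                      # nightly build: full suite
    # run_tests(sample_ratio=0.2)    # crunch time: sampled subset
```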

Now the next factor: when you're testing under the assumption that you don't have automated test coverage (or at least the time frame doesn't allow you to run the full automated test suite) and you have to test the software manually to certify it. If you are under pressure to complete the testing within a specified time frame, then my suggestion is - go for targeted testing. To determine what and how much to test, follow the Goldilocks principle - don't over-test it, don't under-test it, test what is JUST RIGHT. You'll often find that you can cover 80% of the software's features by executing just around 20% of the test cases that you have, and that you would spend the remaining 80% of your resources to cover the rest; check the famous 80-20 rule if you don't take my word for it - http://en.wikipedia.org/wiki/Pareto_principle. So identify that 20% of the test cases and run them, and if you're asked to release your software before you are able to run the remaining test cases, go ahead and release it. One important thing to remember: make sure you release it with a confidence factor attached - so, for instance, when you have run 80% of its test cases, label it as "QA certified with 80% confidence". I know you can't release it to the external world with that tag, but nonetheless you should have that number attached to it to communicate the risk factor to management. And most importantly, to cover your back.
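As a rough illustration of how you might pick that 20% and put a number on the confidence factor, here is a sketch; the feature map, the greedy selection, and defining confidence as "features covered / total features" are my own assumptions for the example, not a prescription from the post:

```python
def pick_targeted_tests(feature_map, target_coverage=0.8):
    """Greedily pick tests until the requested share of features is covered.

    feature_map: dict mapping test name -> set of feature names it exercises.
    Returns (selected test names, confidence), where confidence is simply
    the fraction of all known features exercised by the selected tests.
    """
    remaining = dict(feature_map)              # don't mutate the caller's map
    all_features = set().union(*feature_map.values())
    covered, selected = set(), []
    while remaining and len(covered) / len(all_features) < target_coverage:
        # Pick the test that exercises the most not-yet-covered features.
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:
            break                              # nothing new left to gain
        selected.append(best)
        covered |= remaining.pop(best)
    return selected, len(covered) / len(all_features)

# Hypothetical example: six test cases exercising five features.
tests = {
    "test_login":    {"auth"},
    "test_checkout": {"cart", "payment"},
    "test_search":   {"search"},
    "test_profile":  {"profile"},
    "test_smoke":    {"auth", "cart", "search"},
    "test_refund":   {"payment"},
}
chosen, confidence = pick_targeted_tests(tests)
print(chosen, "-", "QA certified with {:.0%} confidence".format(confidence))
```

With these made-up numbers, two of the six tests already cover four of the five features, which is exactly the kind of 80/20 split the Pareto principle describes.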

The last but definitely not the least factor is what you're testing. If you're testing NASA's space flight module, you have no choice but to test it in FULL, and in that case you're certainly not told to finish testing by a certain date; rather, the release date, i.e. the mission launch date, is determined by when you're able to complete 100% of the testing. The same is true for medical equipment software or life support systems. But when you're testing a non-mission-critical system and missing a bug won't take the company down, and you're given a deadline to meet, go ahead boldly with confidence-based testing. (I remember once I logged in to Facebook and was taken to someone else's profile, while I could see certain photo albums of mine within that profile; that happened around 2010, and now in 2013 Facebook has turned out to be the de facto platform for social networking - that bug was surely missed by some test engineer, but nobody seemed to care.) One more thing about test coverage: there are cases where you want to run both positive and negative test cases for a feature, and there are cases where you're just fine running only the positive test cases (this is applicable to software that is built for in-house use and will never go outside the corporate intranet boundary).
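To illustrate the positive/negative distinction, here is a small sketch; the `parse_age` function and its behavior are made up for the example. The positive case checks that valid input produces the expected result, while the negative cases check that bad input is rejected rather than silently accepted:

```python
import unittest

def parse_age(text):
    """Hypothetical function under test: parse a non-negative integer age."""
    value = int(text)
    if value < 0:
        raise ValueError("age cannot be negative")
    return value

class ParseAgeTests(unittest.TestCase):
    def test_valid_age(self):
        # Positive test case: well-formed input produces the expected value.
        self.assertEqual(parse_age("42"), 42)

    def test_invalid_input_is_rejected(self):
        # Negative test cases: invalid input must raise, not pass through.
        with self.assertRaises(ValueError):
            parse_age("-1")
        with self.assertRaises(ValueError):
            parse_age("not a number")

if __name__ == "__main__":
    unittest.main()
```

For purely internal tools, the post's point is that you might ship with only the first kind of test; for anything exposed to the outside world, you want both.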

The bottom line is, you always want to create every possible test case for every feature and run them all, but you need to act based on the reality on the ground, and you shouldn't hesitate to take calculated risks.

2 comments:

Shahidul Mahfuz said...

Agreed on most of the points. I would like to share a few observations from my experience:
(Yes, the "it depends" part is always there.)

What I have experienced so far is that the company is always in a rush to release a product (software / firmware), and QA is usually given a small amount of time to test thoroughly (even though testing starts from the product design phase).

So, to meet that demand, the best-case scenario becomes: "Eliminate show stoppers, eliminate safety issues, record all the issues found during testing and prioritize them P1 to P5 based on merit, fix up to P3. Leave the rest for the next release."
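A minimal sketch of that triage policy, assuming a simple issue list and a P3 cut-off; the issue records and the threshold are illustrative, not from the comment:

```python
# Illustrative triage: record every issue, prioritize P1..P5, fix up to P3.
FIX_THRESHOLD = 3  # assumed cut-off: P1-P3 get fixed now, P4-P5 are deferred

issues = [
    {"id": "BUG-101", "title": "Crash on save",         "priority": 1},
    {"id": "BUG-102", "title": "Wrong currency symbol", "priority": 3},
    {"id": "BUG-103", "title": "Tooltip typo",          "priority": 5},
]

fix_now  = [i for i in issues if i["priority"] <= FIX_THRESHOLD]
deferred = [i for i in issues if i["priority"] > FIX_THRESHOLD]

print("Fix in this release:", [i["id"] for i in fix_now])
print("Known Issues (next release):", [i["id"] for i in deferred])
```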

Thus, when the required level (?) of confidence has been achieved, I personally think it's ready for release with "defects".

I would like to mention one thing Dr. James Whittaker, a former Test Engineering Director, once said: "There will be bugs (period). It's about how fast we can fix and release them."

Mohammad Masud said...

It is unfortunate, but the harsh reality is that QA is the space everyone in project management loves to squeeze. I personally believe that QA should get at least half, or maybe more, of the development time - the real construction of the software. But reality doesn't always allow that. So I think the solution to this problem is to do real iterative development, with test cycles scheduled to overlap close to 80% with the development cycle. That isn't easy to plan, but it can be achieved if the QA and development teams are in synergy. It's like orchestrating a piece of music where everyone has to play their part at a near-perfect level. It's not impossible, but it needs a high level of delicate planning.

About your releasing-with-defects comment, I absolutely concur with your thought, and the only thing project management has to decide on is what you put as the question mark. If everyone is in agreement on that number, there's no reason to shy away from releasing it with the magical section of the release notes - "Known Issues".

And finally, I can't agree more that there will be bugs. When a Test Engineering Director has the guts to declare that to the world, it shows how much confidence he has in his QA team and process. Only one small thing I would like to add, if I may: there may be bugs (known or unknown) outside the periphery of the test boundaries, but if there's one within the boundary of the test coverage, it had better be a known bug.