
Thursday, May 29, 2014

Alternative Strategies of Software Testing

Nobody denies that every piece of software should be tested, and the best way to make sure of that is to test all of its features. That's a no-brainer, and one doesn't need to be a QA or software expert to see it. But the reality on the ground often doesn't allow the QA team to take that "best" route to make the software "bug free". So what alternatives do we have to release software with an optimal level of testing when reality doesn't let you execute the complete test suite? I have broadly touched on this in another post, how much testing is enough for software, with a few tips on how to justify a QA test strategy that performs partial testing. Let's look at it from a different perspective to derive some kind of strategy that would allow the QA team to react in a time of necessity.

There could be any number of scenarios that demand a different reaction. However, I would like to focus on three test strategies that I have found helpful based on my experience and that probably cover much of the testing ground:

Risk Weighted Testing

This testing strategy actually goes back to the basic questions of why we need a QA team in the first place and what the purpose of testing is. To me, the QA process exists in the Systems Development Life Cycle (SDLC) to reduce the risk of slipping bugs into the production environment, i.e. to the end users. I know this will immediately stir up controversy, but there's a reason I say it. Ask any software professional (including a QA professional) whether they can proclaim a piece of software "completely" bug free after the QA release. I can guarantee you that nobody will dare to make that claim. The reason is that the QA process makes software relatively bug free and usable to the extent the end users have paid for. Having said all of that, the best way to conduct testing effectively under time and cost pressure is the Risk Weighted Testing (RWT) approach. In this strategy, the risk associated with each functionality is scored along a number of dimensions, and the functionalities are ranked to create an ordered list. That ranked list is then used pragmatically depending on budget, time, and resource constraints. If time is the constraint and only a few days are given to the QA team, the team should start with the top-ranked functionality and work through the list sequentially. On the other hand, if the QA team is asked to test as few functionalities as possible, they should draw the line where it covers the most risk with the least resources (the Pareto principle can be used here).
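To make this concrete, here is a minimal sketch in Python of how a risk-ranked list could drive the cut-off. The feature names and risk scores are hypothetical, and the 80% cumulative-risk threshold is just one simple way to approximate the Pareto principle.

    # Hypothetical risk scores (e.g. likelihood of failure x business impact),
    # produced by whichever estimation technique the team used.
    risk_scores = {
        "payment processing": 45,
        "user registration": 25,
        "order history": 15,
        "report export": 10,
        "profile themes": 5,
    }

    # Rank functionalities from highest to lowest risk.
    ranked = sorted(risk_scores.items(), key=lambda item: item[1], reverse=True)

    # Draw the line where roughly 80% of the total risk is covered.
    total_risk = sum(risk_scores.values())
    covered, must_test = 0, []
    for feature, risk in ranked:
        must_test.append(feature)
        covered += risk
        if covered / total_risk >= 0.8:
            break

    print("Test in this order:", [feature for feature, _ in ranked])
    print("Minimum set covering ~80% of the risk:", must_test)

If time is the constraint, the full ranked order is followed from the top; if scope is the constraint, only the minimum set above the line gets the full treatment.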

The primary challenge of this strategy is creating the risk-weighted list of functionalities. Several techniques and tools can be used, and here I'm explaining a few of them:

Brainstorming: the most efficient and cost-effective method is to get software developers, business analysts, testers, and end users into a conference room and brainstorm the list. People often misuse the term brainstorming for what is merely a controlled discussion, but when brainstorming is done properly it can produce dramatic results.

Delphi method: if you have access to a group of experts, you can use the Delphi method, where the experts take each functionality of the software and risk-weight it through rigorous rounds of refinement. This is comparatively more costly than brainstorming.

Analogy technique: using past experience within the same company, the potential risks of a functionality can be identified. This is the least expensive option, and often the most effective one, when a good number of reference projects are available along with expert personnel to draw the analogy.

There are also quantitative approaches, such as Monte Carlo simulation and PERT distributions, that can be used to create the risk-ranked list if appropriate resources are available.
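As an illustration of the quantitative route, the sketch below runs a simple Monte Carlo simulation over three-point (optimistic / most likely / pessimistic) risk estimates. The feature names and numbers are made up, and the triangular distribution is used here only as a lightweight stand-in for a full PERT (beta) distribution.

    import random

    # Hypothetical three-point estimates of the "cost of failure" per functionality:
    # (optimistic, most likely, pessimistic).
    estimates = {
        "payment processing": (10, 40, 90),
        "user registration": (5, 20, 60),
        "report export": (1, 5, 30),
    }

    def simulated_risk(low, mode, high, runs=10_000):
        """Mean risk over many random draws from a triangular distribution."""
        draws = (random.triangular(low, high, mode) for _ in range(runs))
        return sum(draws) / runs

    # Rank functionalities by their simulated mean risk.
    ranked = sorted(
        ((name, simulated_risk(*points)) for name, points in estimates.items()),
        key=lambda item: item[1],
        reverse=True,
    )

    for name, risk in ranked:
        print(f"{name}: simulated risk ~ {risk:.1f}")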

Change Oriented Testing

In one of my projects, we introduced a bug into a perfectly working functionality while implementing an enhancement that modified a common piece of code. An apparently unrelated functionality, though it proved not to be so unrelated, was broken by that common code change, which was made for the right purpose, i.e. code reusability, and with the right intention. Unfortunately, the bug introduced by that enhancement slipped all the way through to production. This provoked me to rethink the existing testing strategy, which had proved insufficient to provide solid test coverage (and obviously everyone takes their best shot at squeezing the QA cycle as much as they can). To avoid that kind of situation, this Change Oriented Testing (COT) strategy has the development team and the QA team work hand in hand to identify which pieces of code or components have changed in the software, along with the potential impact areas (often labeled as functionalities). The foundation could be a simple Change-Functionality Impact Matrix that captures changes as rows and functionalities as columns, with an X-marked intersection indicating impact. The QA team can't develop this matrix alone, but with the help of the development team a fairly accurate matrix can be created, making it possible to identify the impact areas and then plan the testing to cover the impacted functionalities.
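A minimal sketch of such a Change-Functionality Impact Matrix is shown below in Python, standing in for the spreadsheet mentioned later; the component and functionality names are hypothetical.

    # Change-Functionality Impact Matrix: changed components as rows,
    # and the functionalities they touch as the X-marked columns.
    impact_matrix = {
        "shared discount calculator": {"checkout", "invoice generation", "promotions"},
        "address validation module": {"user registration", "checkout"},
        "report template engine": {"monthly reports"},
    }

    # Components actually modified in this release (supplied by the development team).
    changed_components = ["shared discount calculator", "address validation module"]

    # The union of impacted functionalities tells the QA team what must be retested.
    to_test = set()
    for component in changed_components:
        to_test |= impact_matrix.get(component, set())

    print("Functionalities to cover this cycle:", sorted(to_test))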

To make it successful, the steps below should remain in place in the SDLC:

- In every release, the development team creates a mapping of code components to requirements and hands it over to the QA team ahead of the testing cycle
- The release note includes a section explaining the changes along with the published impact matrix

The downside of this strategy is the effort of building and maintaining the matrix when an automated tool (ideally embedded in the IDE) isn't used; I didn't use any such tool and kept it in a spreadsheet, which proved to be expensive.

Usage Driven Testing

This approach to testing software or systems isn't novel, but what I have found is that it isn't used explicitly in the Software Development Life Cycle (SDLC) by the Quality Assurance (QA) team. At least I haven't seen a Test Strategy document or Test Plan in which the QA team specifies a concrete plan for studying and analyzing end users' behavior and psychology to come up with test plans, test scenarios, and test data.

The fundamental idea behind this strategy is to focus on the features or areas that the users will use most. The automatic next question is, "How would you know that before you release the software?" I don't have a crystal ball to forecast it, but it isn't impossible to get an idea of it, through:

- Surveying the business users on their priority ranking of every feature of the software. Then draw a line, for the sake of discussion at the third quartile, and thoroughly test all the features that fall within that bucket while leaving the rest to only a smoke test (see the sketch after this list)

- Another approach could be shadowing users' day-to-day activity to come up with the list of most-used features. The limitation is that you can only do this if you're automating an existing business process, enhancing existing software, rebuilding a software system, etc.

- The last, and least preferred, option, carrying the highest risk, could be to use expert judgement and pick your list. But even this approach is better than releasing software after random testing or half-done testing with no strategy
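For the survey-based option, here is a minimal sketch of turning hypothetical survey scores into a "test thoroughly" list and a "smoke test only" list, cutting at the third quartile (75th percentile) as one simple reading of the rule above.

    # Hypothetical survey results: average priority score each feature
    # received from the business users (higher = used or valued more).
    survey_scores = {
        "search catalog": 9.1,
        "place order": 8.7,
        "track shipment": 7.5,
        "export wishlist": 3.2,
        "change avatar": 1.8,
    }

    # Rough cut-off at the third quartile (75th percentile) of the scores.
    scores = sorted(survey_scores.values())
    cutoff = scores[int(0.75 * (len(scores) - 1))]

    thorough = [f for f, s in survey_scores.items() if s >= cutoff]
    smoke_only = [f for f, s in survey_scores.items() if s < cutoff]

    print("Test thoroughly:", thorough)
    print("Smoke test only:", smoke_only)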


The way some TV shows run the disclaimer "Do not try this at home" before an apparently risky stunt, I would like to reiterate in conclusion that all of the above testing strategies should be considered only in situations where the full testing cycle cannot be performed, and where senior management understands this and makes the conscious call to go with one of these alternate routes to meet a business need, e.g. customer demand, time to market, or beating the competition.