Nobody denies that every piece of software should be tested, and the best way to ensure quality is to test all of its features. That's a no-brainer, and one doesn't need to be a QA or software expert to see it. But often, the reality on the ground doesn't allow the QA team to take that "best" route to make the software "bug free". So what alternatives do we have to release software with an optimal level of testing when reality doesn't let you execute the complete test suite? I have broadly touched on this in another post, how much testing is enough for software, with a few tips on how to justify a QA test strategy that performs only partial testing. Let's look at it from a different perspective to derive strategies that would allow the QA team to react in times of necessity.
There could be any number of scenarios that demand different reactions. However, I would like to focus on three test strategies that I have found helpful in my experience and that probably cover much of the testing ground:
Risk Weighted Testing
This testing strategy goes back to the basic question of why we need a QA team in the first place and what the purpose of testing is. To me, the QA process exists in the Systems Development Life Cycle (SDLC) to reduce the risk of bugs slipping into the production environment, i.e. to the end users. I know this will immediately stir up controversy, but there's a reason I say it. Ask any software professional (including any QA professional) whether they can proclaim a software "completely" bug free after the QA release. I can guarantee you that nobody would dare to make that claim. The reason is that the QA process makes software relatively bug free and usable to the extent the end users are paying for. Having said all of that, the best strategy for conducting testing under time and cost pressure is the Risk Weighted Testing (RWT) approach. In this strategy, the risk associated with each functionality is assessed along a number of dimensions, and the functionalities are ranked to create an ordered list. That ranked list is then used pragmatically depending on the budget, time, and resource constraints. If time is the constraint and only a few days are given to the QA team, they should start from the top-ranked functionality and work down the list sequentially. On the other hand, if the QA team is asked to test as few functionalities as possible, they should draw the line where the most risk is covered with the least resource (the Pareto Principle can help here).
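To make the idea concrete, here is a minimal Python sketch of how such a ranked list and a Pareto-style cutoff could be produced; the functionality names, likelihoods, and impact scores are purely illustrative, not from any real project.

```python
# Rank functionalities by a simple risk score (likelihood x impact) and draw a
# Pareto-style line that covers roughly 80% of the total risk.
# All names and numbers below are made up for illustration.

functionalities = {
    "payment processing": {"likelihood": 0.9, "impact": 10},
    "user registration":  {"likelihood": 0.5, "impact": 7},
    "report export":      {"likelihood": 0.4, "impact": 4},
    "profile settings":   {"likelihood": 0.2, "impact": 3},
}

# Compute one risk score per functionality and sort from riskiest down.
ranked = sorted(
    ((name, f["likelihood"] * f["impact"]) for name, f in functionalities.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

total_risk = sum(score for _, score in ranked)
covered, must_test = 0.0, []
for name, score in ranked:
    must_test.append(name)
    covered += score
    if covered / total_risk >= 0.8:   # Pareto-style cutoff: most risk, least effort
        break

print("Test in this order:", [name for name, _ in ranked])
print("Minimum set covering ~80% of risk:", must_test)
```

Given only a few days, the team works down the full ranked order; given a "test as little as possible" mandate, it stops at the cutoff list.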
The primary challenge of this strategy is creating the risk-weighted list of functionalities. Several techniques and tools can be used; here I explain a few of them:
Brainstorming: the most efficient and cost-effective method is to get software developers, business analysts, testers, and end users into a conference room and brainstorm the list. People often misuse the term brainstorming to mean just having a controlled discussion, but if brainstorming is done properly it can produce dramatic results.
Delphi method: if you have access to a group of experts, you can use the Delphi method, where the experts take each functionality of the software and risk-weight it through rigorous rounds of refinement. This is costlier than brainstorming.
Analogy technique: using past experience within the same company, the potential risks of each functionality can be identified. This is the least expensive technique, and often the most effective when a large number of reference projects is available along with experienced personnel to draw the analogy.
There are also quantitative approaches, such as Monte Carlo simulation and PERT distributions, that can be used to create the risk-ranked list if the appropriate resources are available.
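For a quick illustration of the quantitative route, here is a rough Monte Carlo style sketch in Python; the three-point (min, most likely, max) estimates and functionality names are hypothetical, and simple triangular distributions stand in for PERT estimates.

```python
# Monte Carlo style ranking when likelihood and impact are only known as
# (min, most likely, max) ranges. Names and ranges are illustrative.
import random

estimates = {
    "payment processing": {"likelihood": (0.6, 0.9, 1.0), "impact": (6, 9, 10)},
    "user registration":  {"likelihood": (0.3, 0.5, 0.7), "impact": (4, 6, 8)},
    "report export":      {"likelihood": (0.1, 0.3, 0.5), "impact": (2, 4, 6)},
}

def mean_simulated_risk(name, runs=10_000):
    lo_l, ml_l, hi_l = estimates[name]["likelihood"]
    lo_i, ml_i, hi_i = estimates[name]["impact"]
    # Sample likelihood and impact from triangular distributions and average
    # the resulting risk scores over many runs.
    scores = [
        random.triangular(lo_l, hi_l, ml_l) * random.triangular(lo_i, hi_i, ml_i)
        for _ in range(runs)
    ]
    return sum(scores) / runs

ranked = sorted(estimates, key=mean_simulated_risk, reverse=True)
print("Risk-ranked order:", ranked)
```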
Change Oriented Testing
In one of my projects, we introduced a bug into a perfectly working functionality while implementing an enhancement that modified a common piece of code. An apparently unrelated functionality, which turned out not to be so unrelated, was broken by that common code change, a change made for the right purpose (code reusability) and with the right intention. Unfortunately, the bug introduced by that enhancement slipped all the way through to production. This provoked me to rethink the existing testing strategy, which had proved insufficient to provide solid test coverage (and, of course, everyone takes their best shot at squeezing the QA cycle as much as they can). To avoid that kind of situation, the Change Oriented Testing (COT) strategy has the development team and the QA team work hand in hand to identify which pieces of code or components have changed in the software, along with the potential impact areas (often labeled as functionalities). The foundation could be a simple Change-Functionality Impact Matrix that captures changes as rows and functionalities as columns, with an X marking each impacted intersection. The QA team can't develop this matrix alone, but with the help of the development team a fairly accurate matrix can be created, from which the impact areas can be identified and the testing planned to cover the impacted functionalities.
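As an illustration, a lightweight version of such an impact matrix could even be kept as simple data rather than a spreadsheet; the change descriptions and functionality names below are hypothetical placeholders.

```python
# A Change-Functionality Impact Matrix as a plain mapping: each code change
# points to the functionalities it may impact (the "X" marks of the matrix).

impact_matrix = {
    "refactor shared discount calculator": ["checkout", "invoicing", "promotions"],
    "new address validation service":      ["user registration", "checkout"],
    "logging library upgrade":             ["all"],  # cross-cutting change
}

def functionalities_to_test(changes_in_release):
    """Collect every functionality impacted by the changes in this release."""
    impacted = set()
    for change in changes_in_release:
        impacted.update(impact_matrix.get(change, []))
    return sorted(impacted)

print(functionalities_to_test([
    "refactor shared discount calculator",
    "new address validation service",
]))
# -> ['checkout', 'invoicing', 'promotions', 'user registration']
```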
To make it successful, the following steps should remain in place in the SDLC:
- In every release, the development team will create a mapping of code components to requirements and hand it over to the QA team ahead of the testing cycle
- The release note will have a section explaining the changes, along with the published impact matrix
The downside of this strategy is the overhead of maintaining the matrix if an automated tool (ideally embedded in the IDE) isn't used; I didn't use any such tool and maintained the matrix in a spreadsheet, which proved to be expensive.
Usage Driven Testing
This approach to testing software systems isn't novel, but I have found that it isn't used explicitly by the Quality Assurance (QA) team in the Software Development Life Cycle (SDLC). At least, I haven't seen any Test Strategy document or Test Plan in which the QA team specifies a concrete plan for studying and analyzing end users' behavior and psychology to come up with the test plan, test scenarios, and test data.
The fundamental idea behind this strategy is to focus on the features or areas that users will use the most. The obvious next question is: "How would you know that before you release the software?" No, I don't have a crystal ball to forecast it, but it may not be impossible to get an idea of it, through:
- Surveying the business users on their priority ranking of every feature of the software. Then draw a line at, say, the third quartile and thoroughly test all the features that fall within that bucket, leaving the rest with only smoke-test level coverage (see the sketch after this list)
- Another approach could be shadowing users' day-to-day activity to come up with a list of the most-used features. The limitation is that you can only do this if you're automating an existing business process, enhancing existing software, rebuilding a software, etc.
- The last option, least preferred and with the highest risk, could be to use expert judgement and pick your list. Even this approach is better than releasing software after random testing or half-done testing with no strategy
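To illustrate the survey-based option, here is a small Python sketch of drawing the third-quartile line on averaged survey scores. The feature names and scores are invented, and treating "within that bucket" as the top quartile is an assumption on my part.

```python
# Aggregate survey priority scores and split features at the third quartile:
# features at or above the line get full testing, the rest get a smoke test.
import statistics

# Each feature maps to the priority scores (1 = low, 10 = high) collected
# from surveyed business users.
survey = {
    "search":         [9, 8, 10, 9],
    "export to PDF":  [3, 4, 2, 5],
    "dashboard":      [8, 7, 9, 8],
    "admin settings": [2, 3, 1, 4],
    "notifications":  [6, 5, 7, 6],
}

avg_priority = {feature: statistics.mean(scores) for feature, scores in survey.items()}

# statistics.quantiles with n=4 returns the three quartile cut points;
# index 2 is the third quartile.
q3 = statistics.quantiles(avg_priority.values(), n=4)[2]

full_test  = [f for f, score in avg_priority.items() if score >= q3]
smoke_test = [f for f, score in avg_priority.items() if score < q3]

print("Test thoroughly:", full_test)
print("Smoke test only:", smoke_test)
```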
The way some TV shows open an apparently risky stunt with the disclaimer "Do not try this at home", I would like to reiterate in conclusion that all of the above testing strategies should be considered only in situations where the full testing cycle cannot be performed, and where senior management understands this and makes the conscious call to go down one of these alternate routes to meet a business need, e.g. a customer need, time to market, or beating the competitors.
5 comments:
This is definitely practical knowledge of the STLC. All of these are essential. How you manage your testing life cycle under time and resource constraints is the more important thing, and some of those tools/techniques are explained here.
I would recommend that all of these be included in the full test cycle as well. This is why the Test Plan is the most important document: it describes and gives an overview of the testing areas at a glance.
Good writing, keep it up.
It's an interesting thought to incorporate these alternate strategies into the software testing cycle (I liked the term STLC). I am all for it, but worried whether it would add extra cost to the overall project if you decide to go for full testing rather than the alternative strategies.
Not so related to this post, but I have always had a different opinion on what a Test Plan is and what constitutes a Test Strategy. Without going into much detail, I think a Test Strategy should be prepared before a Test Plan is prepared (I know you would strongly oppose this, and probably IEEE would also disagree with my point of view). I see the strategy as the overall guiding principle of testing, with multiple plans created to implement that strategy or strategies. In technical terms, Strategy and Plan have a multiplicity relationship where one Test Strategy would have one or more Test Plans. Anyway, I wouldn't push the world to change its view, but somehow I cannot accept the notion that a Test Plan would contain Test Strategies (it doesn't fit my natural thought process).
My thoughts, which may differ from (but hopefully will not contradict) industry standards:
Here is a link to a formal Test Plan format that I wrote on my blog using the IEEE as a guideline. I deviated a little from IEEE: I added a few things (for example, User Scenario Testing) and made a few other changes.
I would like to summarize it this way: the Test Plan is what we are going to do during the STLC of that specific product at the given time.
The Test Strategy is a document of ideas. It could contain different ideas. Whether any of them are applicable will depend on the product type and nature, and the test plan will state which types of testing we will apply and when.
I'm not an expert/guru. These are some of my thoughts, and I welcome ideas that do not contradict industry standards.
This is the beauty of human intelligence. Though we're reading the same thing and working in the same industry, we could develop a radically different perception or wisdom on a similar concept.
Though I'm occasionally a fan of standardization, it is often found that standardization negatively influences creativity. I would suggest people use your test plan template while continuing to question whether the "standards" are still relevant to current needs or whether the world has evolved past the point of standardization.