
Monday, October 19, 2009

Computing the confidence level of a software project or iteration

Usually, once a project, an iteration, or a release wraps up, the team sits in a review meeting (this kind of meeting goes by many names, e.g. Lessons Learned Meeting, Retrospective Meeting, etc.) and tries to identify the mistakes it made (as a natural human tendency we don't like to hear that we've made mistakes, which is why people call it "lessons learned") and the remedies that will keep the same mistakes from recurring in the next cycle (though the lesson of history is that nobody learns from history). My objective in this post is not to dig into those issues but to answer a simple question asked at the beginning or end of the meeting (usually by the manager), and that is: "How did we do in this last release?"

There are many ways of answering this question, such as:
- "We've done a great job. The number of defects has gone down greatly ..."
- "It was good. The developers wrote comparatively more code in this release (along with LOC figures, if available)."
- "The quality of the business requirements was better; the developers didn't have many complaints about requirement issues in this release."
- And the list goes on.

But shouldn't we have a straightforward way of evaluating success or failure, or of comparing one release or project with another, that is unambiguous and, most of all, measurable? After all, we develop software systems that behave in predictable ways (unless it's an AI or robotics project) and can be measured with numbers.

I would rather expect the answer in a format like the following, instead of the answers mentioned earlier:
- "The project was great! The Project Confidence Level (PCL) was 4.5 in this release", or
- "The release went fairly well. The PCL was 2.3; we have room to improve in the field ...", or
- "The PCL was 1.9. It didn't meet our expected goal. Let's talk about what we did wrong ..." etc.
With this unambiguous number we can even set a target for the next release that is easy to communicate to the project team members.

So now, let's talk about what a Project Confidence Level (PCL) could be. The PCL associates a number with a development release or a project, using the historical facts gathered for that project or for similar projects. There are various kinds of facts in a software development project that can be used to compute the PCL. The number of facts will vary depending on the availability of information in a project; the more facts you include in your formula, the more effective your PCL calculation will be. Below is a sample of such facts.

Average LOC/Feature
Average LOC/Developer per Development Day
Average LOC/Defect
Average LOC/Critical Defect
Average LOC/Medium Defect
Average LOC/Simple Defect
Average Defects/Feature
Average Critical Defects/Feature
Average Medium Defects/Feature
Average Simple Defects/Feature
Average Development Days/Feature
Average Project Days/Feature
Average Work Days/Release
Average LOC/Release
Average LOC/Developer per Release
Average Features/Release
Average Defects/Release

Each of the facts above would be assigned a weight reflecting its impact on the development effort, with the weights adding up to 100. Consider that each fact, at its historical average, has a confidence level of 1. The average confidence level of the project then comes out as 1 when the following formula is used:
Average PCL = {sum of (each fact's confidence level * its weight)} / 100
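As a quick illustration (the facts and weights here are hypothetical): suppose we track only three facts with weights 50, 30, and 20, and a release scores 1.2, 0.9, and 1.0 on them. Then PCL = (1.2 * 50 + 0.9 * 30 + 1.0 * 20) / 100 = (60 + 27 + 20) / 100 = 1.07.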

After each release, the above facts would be counted for that release, and the PCL determined through the formula. A PCL lower than 1 is unacceptable, a PCL of 1 or above is expected, and the higher the PCL, the greater the team's achievement.
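To make this concrete, below is a minimal sketch (in Python) of one way a per-release PCL could be computed. It assumes a specific normalization that the formula above leaves open: each fact's score is its historical average divided by the release's measured value when lower is better (e.g. Defects/Feature), and the inverse when higher is better (e.g. LOC/Developer), so that a score of 1.0 means "on par with history". The fact names, baseline values, and weights are all hypothetical.

# Hypothetical PCL calculator. The fact names, baselines, weights,
# and the normalization rule are illustrative assumptions.

# Historical averages gathered from past releases of this (or similar) projects.
HISTORICAL_AVERAGES = {
    "defects_per_feature": 4.0,       # lower is better
    "loc_per_developer_day": 120.0,   # higher is better
    "dev_days_per_feature": 6.0,      # lower is better
}

# Each fact's weight/impact on the development effort; the weights must sum to 100.
WEIGHTS = {
    "defects_per_feature": 50,
    "loc_per_developer_day": 30,
    "dev_days_per_feature": 20,
}

# Facts where a smaller measured value means a better release.
LOWER_IS_BETTER = {"defects_per_feature", "dev_days_per_feature"}

def fact_score(name, release_value):
    """Normalize a release's measurement so the historical average scores 1.0."""
    baseline = HISTORICAL_AVERAGES[name]
    if name in LOWER_IS_BETTER:
        return baseline / release_value  # fewer defects/days -> score above 1
    return release_value / baseline      # more LOC per day -> score above 1

def pcl(release_facts):
    """Average PCL = sum of (each fact's confidence level * its weight) / 100."""
    total = sum(fact_score(name, value) * WEIGHTS[name]
                for name, value in release_facts.items())
    return total / 100

if __name__ == "__main__":
    release = {
        "defects_per_feature": 3.2,      # fewer defects than the average of 4.0
        "loc_per_developer_day": 110.0,  # slightly below the usual pace
        "dev_days_per_feature": 6.0,     # on par with history
    }
    print("PCL = %.2f" % pcl(release))   # prints "PCL = 1.10"

With these numbers the release scores 1.25, 0.92, and 1.00 on the three facts, and the weighted average works out to a PCL of about 1.10, i.e. slightly better than the historical baseline, which under this scheme counts as meeting the expectation.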
