Sunday, September 30, 2012

Software Confidence Index

[I've finally realized the concept I started in 2009 for measuring the confidence index of a software application. The initial idea of this indexing was mentioned in another post of mine from 2009, Project Confidence Level, and I've refined it over the last few years into an indexing scheme (I was influenced by the concept of the Consumer Confidence Index and partly by credit scoring schemes, as I mentioned in my previous post).]

At the end of a successful software release, the members of the project team get some breathing space to reflect on the completed project and ask questions, especially about the success of the project and the lessons learned from it; since we don't like the word "mistake" as a label for our shortcomings, we sugar-coat them as "lessons learned". These questions are absolutely important for understanding the level of success of any project, but the irony is that, most of the time, the answers are so vague in nature that it is almost impossible to quantify the project's success.

Imagine a large conference room at the project completion meeting (there are so many fancy names for this kind of meeting: lessons learned, retrospective, project closure, etc.), full of hard-core technical people, managers, and potentially the sponsors or business users of the delivered software, and everyone is eager to know how well, or not so well, it was done. The first question is usually a very simple one, most of the time asked by the Project Manager or Technical Manager of the project: "How have we done in this project?" Though the question is simple, the drama starts when people from different areas start responding to it.
The range of answers could be:
  • "We've done a great job. The number of defects has gone down greatly …" - this is most probably the technical lead of the project
  • "It was good. The developers wrote comparatively more code in this release (along with LOC figures, if available)" - probably a senior developer or technical lead
  • "The quality of the business requirements was better; the developers didn't have many complaints about requirement issues in this release" - this answer is certainly from a business systems analyst
  • Now the business sponsors or users weigh in: "The software greatly simplifies our day-to-day work and improves the productivity of our employees", or "The performance of the software is not what we expected, so it doesn't add much value to our business process"
  • "The throughput of the team has increased significantly and we were able to deliver more software features compared to our previous release" - this one is probably from the Technical Manager of the project
  • And these go on and on ...
There is nothing wrong with any of the above answers, and they are all right from the perspective of their own business area. But the problem is how you communicate success or failure to a different group of people, say the senior management of the company. Do you think they have enough time or interest to listen to all the answers about lines of code written, defect counts, application performance, or the throughput of the development team? Moreover, how do you compare this result with your next or previous project, or with a different software project in your company or in the industry? So even though answers like the above tell us about the success or failure of the software, they are not easy to communicate across different interest groups, they are not comparable across different software projects, and they certainly do not give a single measure of success or failure that can be projected on a trend graph unambiguously.

Wouldn't it be simpler if we had a straightforward way of evaluating success or failure, or of comparing one release or project with another, that would be unambiguous to everyone and, most importantly, measurable and quantified in a single number or index? My goal is to debunk the myth that software projects cannot be compared with one another, even when the software is developed by the same team of programmers, because every piece of software is a creative and unique piece of work. Let's see if all the dimensions of a software project's success (e.g. quality, productivity, reliability, performance, business value, end-user satisfaction) can be combined into an index that measures them; let's call it the Software Confidence Index (SCI). After all, we develop software systems that follow a predictable path of execution and can be measured in numbers, so why not measure the success of the software we've delivered?

So now let's rephrase the answers we heard in the conference room in a slightly different, more unambiguous way:

  • "The project was great! The SCI was 85 in this release", or
  • "The release went somewhat good. The SCI was 75, we've room to improve in the field ...", or
  • "The SCI was 55. It didn't meet our expected goal. Lets talk about what went wrong ..." etc.
And if people are interested in specific facts like defect density or software performance, those can be dug into in that meeting or in a separate meeting with the specific interest groups. The advantage of quantifying the success of the software through an index is certainly that it creates a new vocabulary for communicating across the board; on top of that, it can be used as a new goal-setting parameter for the entire team involved in the development and implementation of the software.

So now the question is how this SCI could be computed. The SCI covers both sides of the aisle of the software product: the technical side and the business side. I would like to give equal weight to technical and business factors, but this may change depending on the project's technical or business priorities; for example, a software company may give the technical factors a higher weight, whereas a non-software organization may focus more on business value than on technical excellence. This should be determined by the organization's goals and priorities. There is no hard list of facts that must be considered in computing the index, but the list has to be consistent across the organization so that comparisons among the SCIs of different software make sense. Let's check out how it can be computed.

The index considers two kinds of factors: Technical Factors and Business Factors. The Technical Factors include code quality (duplicate code, unused variables, etc.), defect density, effective code coverage of unit tests, use of design documents and processes, use of standard coding practices (security vulnerabilities in the code, memory leaks, etc.), load test benchmark results, and so on.

  • Each factor has threshold values that map its measured value to a point (a sketch follows this list). E.g. if the defect density is less than 2 defects per 1,000 lines of code (LOC), the point is 5, and if it's greater than 20 but less than 25, the point is 1. Points between 1 and 5 cover the in-between values.
  • Each factor has an associated weight. The sum of the weights should be 10.
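
To make the threshold-to-point mapping concrete, here is a minimal sketch in Python. Only the two defect-density bands mentioned above come from this post; the in-between bands and all names and values are my own illustrative assumptions:

    def point_from_thresholds(value, bands):
        # Return the point of the band containing `value`, or 0 when the
        # value falls outside the accepted range (as described later).
        for low, high, point in bands:
            if low <= value < high:
                return point
        return 0

    # Defect density per 1,000 LOC: (lower bound, upper bound, point).
    # Only <2 -> 5 and 20-25 -> 1 are from the post; the rest are hypothetical.
    DEFECT_DENSITY_BANDS = [
        (0, 2, 5),
        (2, 8, 4),
        (8, 14, 3),
        (14, 20, 2),
        (20, 25, 1),
    ]

    print(point_from_thresholds(1.5, DEFECT_DENSITY_BANDS))   # -> 5
    print(point_from_thresholds(22.0, DEFECT_DENSITY_BANDS))  # -> 1
    print(point_from_thresholds(30.0, DEFECT_DENSITY_BANDS))  # -> 0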

The Business Factors take the form of a questionnaire sent to the customers (the end users) as a survey. Sample questions in the survey could be:

  • Does the application have all the features that were committed to the business?
  • The application saves valuable time and simplifies the end users' day-to-day job
  • It is very easy and intuitive to use the features of the software
  • How satisfied are you with the performance of the software?
  • How satisfied are you with the response of the IT Team to any problem experienced in the software?
  • Overall how satisfied are you with the software?
  • How likely are you to recommend this IT Team to others with similar software need?
Each question is scored on points ranging from 1 to 5, where 5 is Extremely Satisfied or Completely Agree and 1 is Extremely Dissatisfied or Completely Disagree. Similar to the Technical Factors, each business factor question has a weight, and the sum of all the weights is 10. For both the Technical Factors and the Business Factors, the point is 0 if the value falls outside the accepted range.
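
As a sketch of how the survey side could be scored, suppose each question above carries a weight (the weights totaling 10) and each response maps to a point from 1 to 5. The weights and responses below are purely hypothetical:

    # (weight, response point) per survey question; weights must total 10.
    business_factors = [
        (2.0, 5),  # all committed features delivered
        (2.0, 4),  # saves time and simplifies day-to-day work
        (1.5, 4),  # easy and intuitive to use
        (1.5, 3),  # satisfaction with performance
        (1.0, 5),  # satisfaction with the IT Team's responsiveness
        (1.0, 4),  # overall satisfaction
        (1.0, 5),  # likelihood to recommend the IT Team
    ]

    assert sum(w for w, _ in business_factors) == 10

    business_score = sum(w * p for w, p in business_factors)
    print(business_score)  # 42.5 out of a possible 50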

Once all the factor values are known, compute the Software Confidence Index using the equation below:

    SCI = Σ (wT × pT) + Σ (wB × pB)

where T stands for the Technical Factors and B for the Business Factors, w is a factor's weight, and p is its point. Since each side's weights sum to 10 and points range from 0 to 5, the SCI falls between 0 and 100.
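
As a hypothetical worked example, the factor weights and points below are invented, but they show how an SCI of 85, as in the conference-room answer above, could arise:

    def weighted_sum(factors):
        # factors: list of (weight, point) pairs, the weights summing to 10
        return sum(weight * point for weight, point in factors)

    # Hypothetical (weight, point) pairs for each side of the index.
    technical = [(3.0, 5), (2.5, 4), (2.0, 4), (1.5, 3), (1.0, 5)]
    business = [(2.0, 5), (2.0, 4), (1.5, 4), (1.5, 3), (1.0, 5), (1.0, 4), (1.0, 5)]

    sci = weighted_sum(technical) + weighted_sum(business)
    print(sci)  # 85.0 -> "The project was great! The SCI was 85"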


Here is a snapshot of the factors, along with their weights and points, used to calculate the first SCI:

[Image: table of Technical and Business Factors with their weights and points]

It's a starting point from which the discussion moves in a more rational and quantitative direction rather than remaining vague and subjective. This index is most effective and useful when it is captured over a period of time, enabling comparison with historical data. The SCI can be used in many ways, e.g. to create a benchmark in an organization, to set goals for the software development team, or to compare the heterogeneous set of software delivered by an organization. This isn't a silver bullet that will improve the business's confidence in the software, but it will definitely set the course for improving it.

Friday, June 8, 2012

Follow-up on Project Confidence Level


I have made some modifications to the perspective of the PCL (for details, click on PCL) and included the business factors in it. So far it had considered only the technical factors, but I came to the realization that, to the business or the end users, it makes no sense to reach the highest level of the PCL while not satisfying the business need. I've outlined the end users' perspective in the form of a survey questionnaire that would be sent out to the end users (i.e. the customers of the software) at the end of the release; the answers would then be taken into consideration in computing the PCL.

I have yet to finalize the weight of the end users' feedback in the PCL computation, but it could be 50% of the PCL factors.

An example end-user survey question could look like this:

Q: Does the software simplify your job?
A: 1. Drastically 2. Somewhat 3. Same as before 4. Made worse 5. Made extremely difficult

Note: I presented this idea to my mentor sometime in 2011 (naming it the Software Confidence Index), and he appreciated it and encouraged me to continue pursuing it at work.