Software Quality Insights

Mar 8, 2012 4:50 PM GMT

Metrics for measuring quality

Profile: Yvette Francino

Recently, SSQ created a quality metrics guide, which includes a series of articles, tips and stories related to measuring software quality. It’s a complicated and controversial topic with no easy answers. We asked our readers to weigh in, and I wanted to share a couple of the insightful responses we received.

Darek Malinowski, an ALM Solutions Architect at Hewlett-Packard, writes:

In my opinion, we should start from the business requirements. First, we should estimate the weight (criticality) or business impact of each requirement. Business impact relates to loss of money, loss of confidence, loss of market and so on. So the best way to measure quality is requirements coverage by tests versus requirements criticality. This tells us whether the product we deliver to the business provides all the functionality required.
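Malinowski's idea is easy to make concrete. Below is a minimal sketch, in Python, of criticality-weighted requirements coverage; the requirement names, the 1-to-5 weight scale and the coverage flags are all illustrative assumptions, not anything drawn from HP's tooling.

    # A minimal sketch of criticality-weighted requirements coverage.
    # Requirement names, weights and coverage flags are illustrative.

    requirements = [
        # (name, criticality weight 1-5, covered by at least one test?)
        ("process payments", 5, True),
        ("export monthly report", 3, True),
        ("remember UI theme", 1, False),
    ]

    def weighted_coverage(reqs):
        """Coverage score: covered criticality over total criticality."""
        total = sum(weight for _, weight, _ in reqs)
        covered = sum(weight for _, weight, covered in reqs if covered)
        return covered / total if total else 0.0

    print(f"Weighted coverage: {weighted_coverage(requirements):.0%}")
    # -> 89%: the only uncovered requirement is low-criticality,
    #    so the weighted score stays high.

The point of the weighting is visible in the output: a gap in a trivial requirement barely moves the score, while a gap in a critical one would sink it.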

Haya Rubinstein works as the Quality and Delivery team lead at SAP in the IMS NetWeaver MDM group. After reading Crystal Bedell’s article, What upper management should know about managing software testing processes, she gave a very detailed answer to our question about how her organization measures software quality.

She says:

In my experience, management is interested in the test KPIs even before the release is decided on. There is usually a quality standard they wish to adhere to.

Following is an example of KPIs for a software release:

  • 0% defects on the top ten priority functionalities.
  • Less than 5% degradation in performance.
  • All new features have been tested.
  • All new “hard” software requirements were met.
  • All quality standards were met or mitigations were agreed on with quality standard owners.
  • Over 90% of tests on new features have passed.
  • Over 95% of all unit tests have passed.
  • Over 95% of all regression tests have passed.
  • Over 95% of all quality standard tests have passed.
  • No Very High or High priority defects remain open.
  • All medium priority defects have been examined by development to ensure they are not a symptom of a more severe issue.
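Gates like these are straightforward to check mechanically once the release numbers are in. Here is a minimal sketch of such a KPI gate check in Python; the release figures are invented for illustration, and only a subset of the thresholds above is encoded.

    # A minimal sketch of a KPI gate check against thresholds like the
    # ones above. The release figures are made up for illustration.

    release = {
        "top10_defects": 0,
        "perf_degradation_pct": 3.2,
        "new_feature_pass_pct": 92.0,
        "unit_test_pass_pct": 97.5,
        "regression_pass_pct": 96.1,
        "open_high_priority_defects": 0,
    }

    kpis = [
        ("0% defects on top ten functionalities", release["top10_defects"] == 0),
        ("< 5% performance degradation", release["perf_degradation_pct"] < 5),
        ("> 90% new-feature tests passed", release["new_feature_pass_pct"] > 90),
        ("> 95% unit tests passed", release["unit_test_pass_pct"] > 95),
        ("> 95% regression tests passed", release["regression_pass_pct"] > 95),
        ("no Very High/High defects open", release["open_high_priority_defects"] == 0),
    ]

    for name, met in kpis:
        print(f"{'PASS' if met else 'FAIL':4}  {name}")
    print("Release gate:", "GREEN" if all(met for _, met in kpis) else "RED")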

Rubinstein goes on to describe the detailed and disciplined process used throughout the development lifecycle, with test milestones running from the planning of a feature through its delivery. She says that at the end of the test cycle, management receives weekly reports aimed at showing compliance with the KPIs, including:

  1. Any risks to KPI compliance, presented together with the proposed mitigations.
  2. Compliance with each of the KPIs above, with the current status (number or percentage, color coded).
  3. Details of open Very High/High priority defects: defect number, summary of the issue, created by, opened-on date, assigned to, current status.
  4. Approval status for the relevant features against the hard software requirements, with details of the defects still open on each feature.
  5. A link to the full defect report.
  6. A link to the full test plan report.
  7. Notes for all open defects.
  8. An overall status (green, yellow or red).
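A rollup like item 8 typically follows a worst-status-wins rule. The sketch below assumes that rule (Rubinstein doesn't say how the overall color is actually derived), along with invented per-KPI statuses:

    # A minimal sketch of the color-coded rollup such a weekly report
    # implies. The worst-status-wins rule and the per-KPI statuses are
    # assumptions; the source doesn't specify them.

    SEVERITY = {"green": 0, "yellow": 1, "red": 2}

    kpi_status = {
        "top-ten defect count": "green",
        "performance degradation": "yellow",  # at risk, mitigation proposed
        "regression pass rate": "green",
        "open High-priority defects": "green",
    }

    overall = max(kpi_status.values(), key=lambda c: SEVERITY[c])
    for kpi, color in kpi_status.items():
        print(f"{color.upper():6}  {kpi}")
    print("Overall status:", overall.upper())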

What about your organization? What metrics do you use to measure quality?

2 Comments on this Post

 
  • Lucasistheman
    Hello Yvette, interesting insight into how companies such as HP and SAP tackle the "problem" of software quality. And thank you for the SSQ Quality Metrics Guide. As you say, software quality is a controversial topic. Which is good, because it leaves room for discussion, and at the end of the day one can decide how to measure quality based on what one thinks is best for his or her particular case.

    After reading, I can't help mentioning Optimyth's 4th commandment of software quality (http://blog.optimyth.com/2012/03/the-10-commandments-of-software-quality): IV. Testing alone is not enough. Yes, we've heard this before. But it's true, and little is said about the other things you can effectively do to improve your software's quality. The 10 commandments article mentioned above can give you some ideas.

    Nevertheless, I find it a little simplistic to relate quality to the absence of defects ONLY, and to define the KPIs only as functions of the number of defects, their priority, density, etc., combined with the test coverage of the code. Those can give you measures of the functionality, usability and even performance aspects of your software, depending on the kinds of defects you are looking for with your tests. However, there are other fundamental aspects worth measuring, like maintainability, efficiency and reliability. Nothing new here either: these, and the ones above, are among the characteristics defined by the ISO 9126 and the subsequent ISO 25000 standards for measuring the quality of a software product.

    I consider it a good practice to measure as many of these aspects as possible from the beginning of a development project, including the definition and requirements phase (in traditional methodologies) or in every iteration, sprint, you name it (in agile methodologies). Why?

    1. It is a defined standard backed by regulatory bodies and the industry.
    2. The concepts are easily understood by anyone in the organization, helping you align better with management and the business.
    3. Measuring them will give you control over the delivered software.
    4. Controlling them can save you a lot of money in development and future maintenance, helping you cheer up your managers.
    5. Happy users, helping business growth.

    Now, the standards tell you what to do, not how to do it. This is where it gets interesting. Here you have to apply several techniques to gather information: testing (functional, performance and load), static code analysis, compliance with coding rules and best practices, intrinsic code metrics, requirements tracing, project data, etc., and combine them to get a reliable measure of the ISO-defined aspects. Which techniques do you use, and what is the best way to combine them? It is up to you... It is what we call the quality model of your organization. But that is a different story. J.
  • Yvette Francino
    Lucasistheman, Thank you so much for your well thought-out and articulate response! Sadly, we don't get many serious comments on these blog posts, and when we do get one, such as yours, it's a real treat! You bring up some good points, though unfortunately, many organizations do not have the resources to measure as thoroughly as you suggest, and so they try to narrow down to the KPIs that will give them the most bang for their buck. It's also difficult, of course, for measurements to accurately reflect quality when factors are continually changing. However, your points are all well-taken. I'm going to put Optimyth on my list of blogs to read. Thanks for the pointer to the article and thank you, again, for your insightful comment.
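The "quality model" Lucasistheman describes, which combines normalized measures from different techniques into scores for the ISO 9126 characteristics, boils down to a weighted aggregation. A minimal sketch follows; every weight, measure name and input value here is an illustrative assumption, not anything from the standards or from Optimyth.

    # A minimal sketch of a weighted "quality model" in the spirit of the
    # comment above: normalized measures (0.0 = worst, 1.0 = best) from
    # different techniques roll up into per-characteristic scores.
    # All weights and input values are illustrative assumptions.

    measures = {
        "functional_test_pass_rate": 0.96,    # functional testing
        "load_test_headroom": 0.80,           # performance/load testing
        "coding_rule_compliance": 0.88,       # static analysis
        "cyclomatic_complexity_score": 0.70,  # intrinsic code metrics
        "mtbf_score": 0.92,                   # field reliability data
    }

    # Which measures feed each characteristic, and with what weight:
    model = {
        "functionality": [("functional_test_pass_rate", 1.0)],
        "efficiency": [("load_test_headroom", 1.0)],
        "maintainability": [("coding_rule_compliance", 0.5),
                            ("cyclomatic_complexity_score", 0.5)],
        "reliability": [("mtbf_score", 0.7),
                        ("functional_test_pass_rate", 0.3)],
    }

    for characteristic, inputs in model.items():
        score = sum(measures[m] * w for m, w in inputs) / sum(w for _, w in inputs)
        print(f"{characteristic:15} {score:.2f}")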
