In Software Development, Quality Matters

For much of the past week, I have been debating software quality and how to measure it with one of my clients. Twenty years ago, when I managed software development for a living, I had three quality “metrics” for my engineering teams: delivered defect density; usability; maintainability.

Each was based on several different measures—for example, we differentiated between defects that were coding errors and defects that were incorrectly implemented features—and not every metric was available all the time. You don’t get to measure maintainability until you have been doing maintenance for a while.
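To make the first of those concrete, here is a minimal sketch of how a delivered defect density figure might be tallied, assuming a simple defect record that separates coding errors from incorrectly implemented features and normalizes per thousand lines of delivered code. The field names and the per-KLOC normalization are my own illustration, not the original definitions.

```python
from dataclasses import dataclass

@dataclass
class Defect:
    severity: int             # 1 = critical ... 4 = cosmetic
    kind: str                 # "coding_error" or "misimplemented_feature"
    found_after_release: bool

def delivered_defect_density(defects: list[Defect], delivered_kloc: float) -> dict:
    """Defects found after release, per thousand lines of delivered code,
    broken out by kind (coding error vs. incorrectly implemented feature)."""
    delivered = [d for d in defects if d.found_after_release]
    by_kind: dict[str, int] = {}
    for d in delivered:
        by_kind[d.kind] = by_kind.get(d.kind, 0) + 1
    density = {"total_per_kloc": len(delivered) / delivered_kloc}
    density.update({f"{kind}_per_kloc": n / delivered_kloc for kind, n in by_kind.items()})
    return density
```

The detail that matters is that each figure is a number you can trend over releases, not a box you tick once.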

For each metric I had defined target levels (no Level 1 critical code defects, for example) and improvement targets. My team leaders all knew what these measures were and that they would be assessed on how well they did against the targets.

At the final engineering review prior to release to production, we would walk through all the measures, look at both the history and final status of the quality process and get a final sign-off from engineering, customers and the production support team.

Miss one of the critical targets and your code would not make it through the review. This approach served me pretty well for most of a decade, and I went into the conversation with my client thinking it would work this time as well.
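A gate like that reduces to a simple comparison of measured values against targets. The sketch below is only an illustration of that idea; the target names and numbers are invented, not the actual review criteria described above.

```python
# Hypothetical targets -- the names and ceilings are invented for illustration.
TARGETS = {
    "level_1_defects_open": 0,           # no open Level 1 critical defects
    "delivered_defects_per_kloc": 0.5,   # illustrative ceiling, not a real figure
}

def passes_release_review(measures: dict) -> tuple[bool, list[str]]:
    """Return (passed, misses): any measure over its target blocks the release."""
    misses = [name for name, ceiling in TARGETS.items()
              if measures.get(name, float("inf")) > ceiling]
    return (not misses, misses)

ok, misses = passes_release_review(
    {"level_1_defects_open": 1, "delivered_defects_per_kloc": 0.3}
)
# ok is False and misses == ["level_1_defects_open"]: the build does not ship.
```

The gate itself is trivial; the work is in defining the measures and targets it reads.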

But developing software today is not always that simple. In addition to coding errors, feature failures and performance requirements, engineers have to worry about things like security, privacy rules, localization factors (including specific use of language and images) and device characteristics (is the user device a phone, a PDA, a TV or a notebook computer?), as well as ease of deployment and supportability. More people need to be involved in upfront design decisions, in test design and in the review of test processes.

I recently saw a checklist with over 70 required “sign-offs” before a software deliverable could be added to the production environment. All called “quality gates.”

Something seems out of balance here. Let’s concede that all these new characteristics are indeed necessary. But are they really matters of “software quality,” and if so, how do I measure them? In most cases, I seem to be creating an “acceptability” metric, based on a set of non-functional characteristics that I can perhaps test for on a “pass/fail” or “present/absent” basis. I can design a review process that does this, but it’s not, in my mind, a “quality” issue.
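The distinction is easier to see written down: an “acceptability” gate reduces to a list of pass/fail checks, with no notion of how good, only present or absent. The checks below are invented purely for illustration.

```python
# Invented pass/fail checks -- there is no scale here, only present/absent,
# which is what makes this feel like "acceptability" rather than "quality."
ACCEPTABILITY_CHECKS = {
    "security_review_signed_off": True,
    "privacy_rules_reviewed": True,
    "localization_strings_externalized": False,
    "supported_devices_verified": True,
}

def is_acceptable(checks: dict) -> tuple[bool, list[str]]:
    """A deliverable is acceptable only if every check passes."""
    failures = [name for name, passed in checks.items() if not passed]
    return (not failures, failures)

accepted, failures = is_acceptable(ACCEPTABILITY_CHECKS)
# accepted is False; failures == ["localization_strings_externalized"]
```

Unlike the defect-density figure earlier, there is nothing here to trend over time or to set an improvement target against.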

So we designed a survey to see what the client’s software engineering teams are currently doing to measure quality. The initial snapshot isn’t encouraging: many “measures” were cited, but there were few quantitative definitions and little correlation between the measures being used and the “quality” of the final deliverable.

In fact, the biggest problem we observed is that there is no real definition of “quality” in use. We are going to do some more work on this over the coming months and see if we can get an acceptable definition (or set of definitions) worked out and deployed.

I’ll keep you all posted as the work proceeds, but I’d also be interested in what other groups use for definitions and how they measure against them.
