Software Metrics
Learning Objectives
- Differentiate the importance of certain software metrics over others in a business context
- Distinguish between metrics that can be quantified and automated and metrics that are qualitative
- Define metric and measurement
Description
Determining software quality in an objective, quantifiable way is the primary goal of software metrics. Some metrics, like Quality of Comments, are difficult to automate and require manual review. Others, like Lines of Code, can be collected using a variety of tools designed to automate the process. Though the intent behind representing quality as a quantitative value is to demonstrate a level of objectivity, the meaning behind the numbers is often considered subjective, or relative to a particular context such as a development team or a portfolio of products. Some measurements of software quality require both quantitative and qualitative metrics, which together can produce a more holistic perspective. Characteristics of software quality are expressed as measurements of extensibility, maintainability, portability, reusability, reliability, scalability, security and testability.
Key Terms
Metric: “…[a] system of measurement through which the merits of an entity can be assessed.” [footnote]Venters, Colin, Lydia Lau, Michael Griffiths, Violeta Holmes, Rupert Ward, Caroline Jay, Charlie Dibsdale, and Jie Xu. “The Blind Men and the Elephant: Towards an Empirical Evaluation Framework for Software Sustainability.” Journal of Open Research Software 2, no. 1 (July 9, 2014): e8. https://doi.org/10.5334/jors.ao.[/footnote]
Measure: “…may contribute to a metric as a set of quantitative values within the system.” (Venters et al.)
The difference between low- and high-quality software has implications for project cost, organizational risk and growth potential, and can affect decisions about developer attention and retention, product direction and human resource management. Without quantitative metrics, and objectivity as their goal, quality can only be expressed as sentiment or personal (expert) opinion. The desire for assurance from something other than individual perception is what should drive interest in software metrics.
Much of the preliminary work to obtain this assurance involves finding the right question to answer. Next comes discovering the right tool for your code base: one capable of generating a metric that correlates with a characteristic of quality and brings you closer to your organizational goal.
Exercise 1 – Correlation
Look through the table at the following website and note the varying degrees of correlation between certain metrics and code quality. Not all metrics are created equal! Observe that some metrics can be automated and others rely on manual intervention. Keeping in mind all the different types of code repositories these assertions of correlation could be applied to, is it more accurate to think of these as objective correlational measures or subjective correlational measures? Why?
Characteristic | Correlation with quality | Automation | Metric |
Modifiability | very high correlation | partially automatable | dependency metrics |
Modifiability | very high correlation | partially automatable | cohesion metrics |
Testability | very high correlation | fully automatable | use-case scenario metrics |
Understandability | high correlation | fully automatable | method/class length metric |
Understandability | fair correlation | partially automatable | presence of worthwhile comments (ratio of comments to source code) |
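The last row above also hints at why a metric like comment ratio is only partially automatable: a tool can count comment lines, but deciding whether the comments are worthwhile still requires manual review. As a rough illustration (not drawn from the site above), here is a minimal Python sketch that computes a comment-to-code ratio for a single Python source file using the standard library tokenize module; the file name is a placeholder.

```python
import tokenize

def comment_ratio(path: str) -> float:
    """Return the ratio of comment lines to non-blank lines of code.

    Counting '#' comments can be automated; judging whether the comments
    are worthwhile still requires a human reviewer.
    """
    comment_lines = set()
    code_lines = set()
    with open(path, "rb") as handle:
        for tok in tokenize.tokenize(handle.readline):
            if tok.type == tokenize.COMMENT:
                comment_lines.add(tok.start[0])
            elif tok.type not in (
                tokenize.NL, tokenize.NEWLINE, tokenize.ENCODING,
                tokenize.ENDMARKER, tokenize.INDENT, tokenize.DEDENT,
            ):
                code_lines.add(tok.start[0])
    return len(comment_lines) / max(len(code_lines), 1)

# Hypothetical usage: point it at any Python file in your repository.
print(f"comment/code ratio: {comment_ratio('app.py'):.2f}")
```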
Metrics can, and should, be used throughout the entire software development lifecycle. They can also be especially useful for software that is acquired. Open source products, through licensing, infrastructure and intent, are designed, via dependencies, libraries or architecture, to use other open source products. Software metrics can be a quick way to assess the risk associated with creating a dependency on an unknown code base, as well as the community support around that code base.
Software metric tools can be integrated in numerous ways depending on need. For instance, command line tools can be invoked as part of a continuous integration process or used manually to obtain a quick summary of text-based analysis on a locally hosted repository. Software analytics platforms, on the other hand, offer a full suite of metrics and accompanying visualizations. For developers and more advanced users, metrics can also be built into custom applications.
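For example, a command line tool such as cloc can be wrapped in a small script and run as a step in a continuous integration pipeline. The sketch below is a hypothetical example rather than a prescribed setup: it assumes cloc is installed on the build agent, runs it with its --json flag, and fails the build step if the total line count exceeds an arbitrary threshold.

```python
import json
import subprocess
import sys

# Hypothetical threshold; real limits depend on your project and goals.
MAX_TOTAL_CODE_LINES = 250_000

def collect_loc(repo_path: str = ".") -> dict:
    """Run cloc against a repository and return its parsed JSON output."""
    result = subprocess.run(
        ["cloc", "--json", repo_path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    report = collect_loc()
    total_code = report["SUM"]["code"]  # cloc's aggregate count of code lines
    print(f"Total lines of code: {total_code}")
    if total_code > MAX_TOTAL_CODE_LINES:
        # A non-zero exit code fails the CI step.
        sys.exit("Code size exceeds the agreed threshold; review before merging.")
```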
Importance
Metrics may look very different between organizations. What is useful tends to be subjective, since it depends on contextual factors like organizational goals and individual perceptions of how to achieve those goals. Knowing that there are a variety of things that can be measured means that organizations must find the right questions and map them to the appropriate metrics. For instance, some metrics can be correlated with a certain degree of software quality, while other metrics focus on the health of the open source community around the software. Building a model of software quality specific to the project at hand is one way to match metrics to goals.
Exercise 2 – Enumerating Software Metrics
Look through the following web resources and note the different ways that software quality can find expression: sustainability, maintainability, community health. If you’re new to metrics, you’ll notice the scope is quite broad. To become more familiar with this breadth, and using the example below as a starting point, enumerate a list that maps metrics to what they measure.
- Examples of Software Metrics
- Software Maintainability
- Software Sustainability
- Software Community Health
Example:
Metric | Description | Measurement |
LOC | Lines of Code (total) | Size |
SLOC | Source Lines of Code | Size
LLOC | Logical Lines of Code | Size
MLOC | Million Lines of Code | Size
Cyclomatic Complexity | The number of linearly independent paths through the code | Complexity |
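Several of the metrics in the example table can be collected automatically. As a minimal sketch, assuming the third-party radon package (one of several options), the code below reports raw size metrics and the cyclomatic complexity of each function in a small, made-up snippet.

```python
from radon.raw import analyze          # raw size metrics (LOC, SLOC, LLOC, comments)
from radon.complexity import cc_visit  # cyclomatic complexity per function/class

# A made-up snippet to measure; in practice you would read a real source file.
source = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"
"""

raw = analyze(source)
print(f"LOC={raw.loc}, SLOC={raw.sloc}, LLOC={raw.lloc}, comments={raw.comments}")

for block in cc_visit(source):
    # Three linearly independent paths through classify() -> complexity of 3.
    print(f"{block.name}: cyclomatic complexity = {block.complexity}")
```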
Confidence in software quality can be confirmed, to a certain degree, with carefully chosen metrics or a combination of metrics. Being assured that software will do what it is intended to do is a desirable outcome and a justification for investing time in software metrics. Though it may be tempting to reduce software quality to a set of automated numbers, qualitative assessment can sometimes complement quantitative assessment by informing a more holistic perspective.
Example 1 – Maintainability Matrix
Table 3 on page 7 of the following Technical Document is a Maintainability score matrix. This maintainability score is measured by calculating the sum of five weighted metrics.
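The specific metrics and weights are defined in the document itself; the sketch below only illustrates the general idea of a weighted sum, using entirely hypothetical metric names, scores and weights.

```python
# Hypothetical normalized metric scores (0 = worst, 1 = best) and weights.
# The real metrics and weights are the ones defined in Table 3 of the linked document.
scores = {
    "cyclomatic_complexity": 0.7,
    "comment_ratio": 0.5,
    "duplication": 0.8,
    "coupling": 0.6,
    "test_coverage": 0.9,
}
weights = {
    "cyclomatic_complexity": 0.30,
    "comment_ratio": 0.15,
    "duplication": 0.20,
    "coupling": 0.20,
    "test_coverage": 0.15,
}

# The maintainability score is the weighted sum of the five metric scores.
maintainability = sum(weights[name] * scores[name] for name in scores)
print(f"Maintainability score: {maintainability:.2f}")  # 0.70 with these numbers
```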
Data analysis requires interpretation. Regardless of whether data is collected automatically or manually, or represented visually in a graph, a chart or a numerical format, metrics are useful if they tell a story. Ultimately, analysis aims to create plausible narratives backed by reasonable interpretations of data. When a story is compelling and verifiable with quantitative evidence, it may even inspire change. It is necessary to interpret what the numbers mean in a specific context and, where the numbers don’t make sense, to find out why.
Example 2: Story Telling
The following data visualization represents the software metric Lines of Code applied to 7 releases of one code base over a period of 6 months (March – September) using the open source command line tool cloc (Count Lines of Code). From this visualization, derived from the output of cloc, we can see the frequency and distribution of releases, the number of lines of code and the language breakdown of each release.
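A data set like this can be assembled by running cloc against each release. The sketch below shows one hypothetical way to do it, assuming cloc and git are installed and that each release corresponds to a git tag; it exports each tagged tree with git archive and counts it with cloc.

```python
import json
import subprocess
import tarfile
import tempfile
from io import BytesIO

def loc_per_release(repo: str = ".") -> dict:
    """Count lines of code for every git tag in a repository using cloc.

    Returns a mapping of tag -> cloc's JSON report for that release.
    """
    tags = subprocess.run(
        ["git", "-C", repo, "tag", "--sort=creatordate"],
        capture_output=True, text=True, check=True,
    ).stdout.split()

    reports = {}
    for tag in tags:
        # Export the tagged tree to a temporary directory, then run cloc on it.
        archive = subprocess.run(
            ["git", "-C", repo, "archive", "--format=tar", tag],
            capture_output=True, check=True,
        ).stdout
        with tempfile.TemporaryDirectory() as tmp:
            with tarfile.open(fileobj=BytesIO(archive)) as tar:
                tar.extractall(tmp)
            cloc_out = subprocess.run(
                ["cloc", "--json", tmp],
                capture_output=True, text=True, check=True,
            ).stdout
            reports[tag] = json.loads(cloc_out)
    return reports

# Hypothetical usage: print the total code lines for each release.
for tag, report in loc_per_release().items():
    print(tag, report["SUM"]["code"])
```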
Questions:
- What language represents the largest amount of code?
- What can be inferred from the distribution of releases?
- What can be inferred from the frequency of releases?
- What aspect(s) of software quality can be derived from the lines of code metric?
Interpretation:
A moderate correlation to maintainability (a characteristic of software quality) can be inferred from the number of lines of code, though in the absence of any objective threshold (e.g., less than 20,000 = good, more than 20,000 = bad) it is best to look at that metric over time and understand it as a proxy for one aspect of maintainability. Seeing this number dramatically increase between releases may draw attention and set an expectation that more bugs are likely to occur. The presence of Groovy and Cucumber would indicate that automated testing is included in this code base, which speaks to some degree of testability (another characteristic of software quality).
The enumerated list of languages, with JavaScript at the top, and the presence of Pug would suggest that this is primarily a Node.js app. The presence of Dockerfiles, YAML, Python and Shell could be interpreted to mean this adopts a containerized, microservice architecture. The presence of JSX files means this code base is likely to have a React.js front-end. As the number of languages represented in the code base (16) increases, it may negatively affect the overall maintainability.
The tight distribution of releases around August 2019, and the subsequent releases each with nominal increases in the number of lines of code, could coincide with an initial release to production followed by quick, successive minor patches containing backwards-compatible bug fixes discovered once actual users started using the software.
Scenario
Imagine you are a product owner with a new job! First of all, congratulations. Part of your new job is to ensure that the portfolio of products you are responsible for is of high quality. Three explicit organizational goals direct your focus: maintainability, security and scalability. You must be confident that your development team focuses on minimizing security vulnerabilities, ensuring that the products can scale up, and developing them in a way that allows them to be maintained moving forward.
- What metric or combination of metrics would you recommend in order to provide assurance that at least one of these organizational goals is being monitored?
- Would you recommend automating these metrics, or performing them manually?
- How often would you update these metrics to check in on progress?
- How would you report out on their progress?
- What strategy can you employ to help ensure the metrics have a high correlation with quality?
Additional Resources
- Ronchieri, E., and M. Canaparo. “Metrics for Software Reliability: A Systematic Mapping Study.” Journal of Integrated Design & Process Science 22, no. 2 (April 2018): 5–25. https://doi.org/10.3233/jid-2018-0008.
- Woody, Carol, Robert Ellison, and Charlie Ryan. “Exploring the Use of Metrics for Software Assurance,” n.d., 69.