Measure Delivery Performance with Business Metrics

Software development productivity is notoriously hard to measure. At the same time, it’s important to visualize the business impact of any technical debt that CodeScene detects or to measure the effects of changed ways of working in a business context. For that purpose, CodeScene includes a Delivery Performance module that adds business metrics on top of our engineering metrics.

Fig. 139 CodeScene puts its technical and organizational metrics into a business context.

Putting Engineering Metrics into a Business Context

The Delivery Performance module puts CodeScene’s technical and organizational metrics into a business context. This CodeScene feature addresses four important use cases:

  • Put code metrics into a business context: To a non-technical stakeholder, terms like code complexity and technical debt are opaque and intangible. By showing the business impact of a declining code health trend, the engineering organization helps non-technical stakeholders balance the trade-off between long- and short-term goals.

    Example: Can we continue to churn out new features, or is it time to take a step back and invest in improved code quality?

  • Bridge the communication gap between engineering and business: Despite an increased focus on cross-functional collaboration and powerful organizational strategies like DevOps, most software projects still face a communication gap that often results in missed deadlines and low efficiency. The key to successful delivery is to ensure that all stakeholders – both engineering and business – have the same view of what the system looks like and how healthy it is. We need situational awareness.

    Example: Developers know when we take on excess technical debt, but development teams rarely own their time. The business does. With the Delivery Performance metrics, we can shift the discussion from the technical impact (“oh, our code has declined in health”) to the business impact: “our time to market has increased by 20%, and a likely reason is the measurable decline in code health”.

  • Feedback on technical and organizational change: It’s valuable to illustrate the effects of larger refactorings. The engineering organization gets immediate feedback by integrating CodeScene with pull requests (see Integrate CodeScene with Pull Requests), and using the Delivery Performance module, management can now get the same feedback from a business perspective.

    Example: A critical feature is re-designed in response to increasing maintenance costs. Use the Delivery Performance module to show that this investment paid off in terms of less unplanned work (see the graphs above for a real-world example).

  • Increase capacity by optimizing the bottleneck: Most aspects of software development are opaque. As such, it’s way too easy to act on increasing lead times by throwing more people at the problem, taking on excess coordination costs in the process. Acting on CodeScene’s technical and organizational metrics often opens a different set of possibilities: what if we paid down the technical debt in the prioritized hotspots? What if we re-aligned our development teams with the way the system evolves? What if we automated the integration tests for our sub-system hotspots? Using the Delivery Performance metrics, you can measure the impact on throughput and cycle times in real time.

    Example: Instead of hiring more consultants to staff the development teams, look to reduce the amount of unplanned work to gain free – and existing – capacity to work on new features.

Prerequisites for the Delivery Performance Metrics

The delivery performance metrics require the following data:

  • CodeScene integration with project management tools, see Integrate Costs and Issues into CodeScene.

  • Commit metadata is used to calculate the scope, batch sizes, and cycle times. As such, each commit has to be tagged with its corresponding issue.
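As a minimal sketch of what issue-tagged commits look like: the commit message starts with the tracker’s issue key. The `PROJ` project key, the example message, and the hook idea below are hypothetical illustrations; adapt the pattern to your own tracker’s conventions.

```shell
# A commit linked to a (hypothetical) Jira issue "PROJ-1234" would be made as:
#   git commit -m "PROJ-1234 Fix null handling in the payment gateway"
#
# A simple pattern check, usable e.g. in a commit-msg hook, verifies the link:
msg="PROJ-1234 Fix null handling in the payment gateway"
if echo "$msg" | grep -qE '^[A-Z][A-Z0-9]+-[0-9]+'; then
  echo "ok: commit is linked to an issue"
else
  echo "error: commit message must start with an issue key" >&2
fi
```

Enforcing the convention in a `commit-msg` hook or a server-side check keeps the issue mapping complete, which in turn keeps the scope and cycle-time calculations accurate.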

CodeScene supports two different strategies for calculating delivery performance, based on how your release flow works:

  1. Calculate trends based on state transitions captured in a project management tool (e.g. Jira). This is the default option since it works with most organizations’ existing workflows. Using this option, the delivery performance data is shown as periodic weekly trends.

Fig. 147 The default strategy considers an issue “released” when it transitions to a specific state in a project management tool.

  2. Identify releases via Git tags. The second option requires that you tag all releases in Git. The advantage of this approach is that CodeScene can then uniquely identify each release and present the delivery performance data on a per-release basis.

Fig. 148 A more advanced and precise alternative is to use Git tags to identify releases.
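As a sketch of the Git-tag strategy: each release is marked with an annotated tag in the repository that CodeScene analyzes. The `v<major>.<minor>.<patch>` naming scheme below is an example convention, not a CodeScene requirement; use whatever scheme matches your release process.

```shell
# Create an annotated tag marking the release:
git tag -a v2.3.0 -m "Release 2.3.0"

# Publish the tag to the shared repository:
git push origin v2.3.0

# Verify which release tags exist:
git tag --list 'v*'
```

Annotated tags (as opposed to lightweight ones) carry a tagger and a date, which gives each release an unambiguous point in time to report against.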

Note that the Delivery Performance module is currently in beta status. This means that the presentation is likely to change in future CodeScene releases.

Motivation: Turning a Crisis into Success

CodeScene’s delivery performance module was built to close the gap between the engineering organization and the business side of IT. We consider that crucial; developers are the ones that act on technical debt and quality issues, but the business decides what to focus on and how the organization looks. Making sure that both business and engineering share the same view of an evolving system and have effective feedback loops is paramount to success. We capture an example in the following story.

Fig. 149 CodeScene puts its technical and organizational metrics into a business context.

The previous two graphs are from a real-world IT project. The project had gotten off to a good start, and the organization decided to scale up with more developers. However, the existing collaborative strategies were adapted for a small, tight-knit team and didn’t scale well. As a consequence, the organization soon noticed a set of symptoms indicating deeper problems. The most visible symptom was a surge in the number of support issues. Maybe that could be resolved by hiring more testers, or even by expanding with first-line support?

Unfortunately, the problems ran deeper. After a few fatal sprints where little progress was made, the organization considered hiring even more developers since it had to bring long-awaited features to market. Fortunately, before making that choice, a root cause analysis was performed using an earlier version of CodeScene. The outcomes were:

  • The additional engineering people they wanted to hire were already on board. They were just busy reacting to unplanned work due to an increase in the defect rate.

  • The high amount of unplanned work also meant that the most experienced people were busy doing critical ad-hoc bug fixes rather than driving the product forward or supporting the new team members. This explained why it had become so hard to plan for new features, too.

  • The overall throughput declined. Not only do the graphs above show a shift towards more unplanned work; they also show that the organization delivered less and less. Part of this efficiency loss was due to the constant context switches required to act on critical support issues. Another explanation was the increased coordination overhead and the declining code health visible in the graph.

  • Speaking of code, a likely explanation for the negative trends was found in CodeScene’s code health trends (see Code Health – How easy is your code to maintain and evolve?). There was a clear decline, starting shortly after the organization had scaled up. The numbers were still not in the critical range, but there was a clear and problematic downward trend, particularly in the feature areas responsible for most of the support issues.

Based on these findings, the organization acted by re-shaping its engineering and collaborative strategies. The product manager also decided to invest time in paying down the technical debt in the prioritized hotspots. Since the hotspots only made up a small part of the overall codebase, this effort paid off after just two sprints. To ensure that the hotspots stayed healthy, the organization also enabled CodeScene’s quality gates. By measuring the delivery performance, management and engineering could see that their actions had a real effect.

Of course, no tool will ever save an organization, but situational awareness might. It’s key. So use the delivery performance measures to shine a light on your IT performance so that you can stay on top of the game.