
Code quality: a concern for businesses, bottom lines, and empathetic programmers

A programmer’s mental state, communication within their team, and the incentives attached to their work are all reflected in their code. Improving code quality means improving organizational health and competence as a whole.


The value of high-quality code can be difficult to communicate. Some managers see it as a boondoggle, an expensive hobby for overly fastidious programmers, since investing in code quality can slow development over the short term and doesn’t appear to alter the user experience. But nothing could be further from the truth.

It’s true that tech companies with a poor organizational understanding of code quality can launch quickly and see success in the short term. But in doing so they incur an invisible debt that grows every time the code is altered. This debt does not stay intangible for long. Once the product exceeds a very low threshold of complexity, the debt comes due, gradually consuming the productivity of their development team and the usability of their software. When we speak of “technical debt,” these are the dangers we’re talking about.

Organizations that produce software sit at the intersection of a thousand different variables. Anything that affects the mental state of a programmer, communication within their team, or the incentives attached to their work is likely to be reflected in their code. Promoting code quality, then, is partially a matter of improving organizational health and competence as a whole. In this article I’ll briefly define code quality and explain how it impacts the whole company, then zero in on a few organizational habits that are effective in increasing it.

What is code quality?

In a way, learning to code is learning to empathize with a machine: its intense attention to detail, its need for matched parentheses and consistent capitalization, and the helplessness of its error states. These needs can be so foreign and opaque that the most experienced of us still spend hours, even days, chasing down bugs that amount to just a few characters of source code. This is taxing. Sometimes empathizing with our machines is as much as we can handle. In those situations we fall back on a simple metric: does it do the job? Since the foremost purpose of code is to do a job, there are times when we don’t look beyond that.

This approach isn’t sustainable for any but the smallest of projects, though. Code is not sudoku, with a single correct solution for each problem. There are infinite ways to implement any computational task, and some are simpler and more predictable than others. Small differences here add up over time. Code is written once and read a thousand times. Programmers read code when they’re fixing bugs or adding functionality. They read code to remember how their applications work. They read code to discover patterns they can reuse elsewhere. The only thing that’s done to code more often than reading it is executing it. This is the fundamental reason why we concern ourselves with code quality. The usefulness of a piece of code has a great deal to do with its impact on the people who will read it. When we write good code, we’re saving them time and effort. We’re making their jobs easier. We’re making an investment that will pay dividends day after day, year after year, until the application reaches the end of its life.

So, ironically, once we’ve trained our minds to empathize with computers so we can write working code, it’s then our responsibility to remember how to empathize with humans so our code doesn’t frustrate them.

What does code quality look like in mechanical terms? Multiple books have been written on the subject, so I won’t attempt an in-depth explanation here. But to give an overview, high-quality code is code that can be understood quickly. If a programmer can pick a method or class from a codebase at random and understand it deeply in a few minutes—not just its functionality and business logic, but everything it depends on and every way it might be used—without consulting too many other files, then the codebase is probably high-quality. Once this is achieved, the question of whether the code works correctly is far less concerning; it can be changed, fixed, or deleted without much risk or effort.

Granted, I’m describing a philosophical ideal here. In a real-world application there will always be pieces that are unavoidably complex or confusing. But even these pieces can vary widely in quality. There is no situation where code quality is completely out of the programmer’s hands.

A few of the most important factors in code quality are:

  1. Encapsulation. High-quality code is most often made up of self-contained components: one cannot change the behavior of a self-contained component by modifying something outside of it, nor does the component modify things that are external to it (within reason—it makes sense for a component to read and update a database if that’s understood to be its job). This saves development time because when a component needs to be fixed, updated, or deleted, programmers spend less time searching for external causes and effects.
  2. Idiomatic code. Modern programming languages have built-in syntax and methods for the most common tasks, like converting a string to a number or determining if a collection contains a particular element. These are more reliable, performant, and widely understood than anything a programmer might write from scratch, and require far less code than a custom method. Code written idiomatically—that is, using the conventions and built-in features of a language as much as possible—is more readable and requires less maintenance.
  3. Meaningful names. Variables and methods throughout a codebase are named by the programmers who write them. Meaningless names, like `x` or `fn`, require the programmer to understand and remember extra layers of context while reading code that uses them. If there are more than a few of these in one place it becomes impossible to keep everything in working memory. High-quality code uses specific and descriptive variable names, like `departmentName` or `getAnnualExpenses`. Although it is possible to be too verbose here, a programmer reading code for the first time is better served by a name that says too much than one that says too little.
  4. Low cyclomatic complexity. Any time a computer makes a decision, as with an if statement or a for loop, another layer of meaning is added to the code that follows: under one condition, the code will be executed or repeated; under another condition, it will be skipped. “Cyclomatic complexity” is a metric referring to the number of decisions that exist in a process. As with the previous point, a programmer’s working memory can become a bottleneck as layers of meaning accumulate. While decisions are essential to the usefulness of an application, high-quality code minimizes decision points and the code they contain, and avoids nesting them within each other as much as possible (a brief before-and-after sketch follows this list).
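
To make a few of these factors concrete, below is a minimal before-and-after sketch in TypeScript. The task, the record format, and the names (`getAnnualExpenses`, `departmentName`) are hypothetical, chosen only to echo the examples above; the point is how naming, idiomatic methods, and flattened decision points change readability.

```typescript
// Low-quality version: meaningless names, a hand-rolled loop, and nested decisions.
function calc(x: string[], fn: string): number {
  let t = 0;
  for (let i = 0; i < x.length; i++) {
    const p = x[i].split(":");
    if (p[0] === fn) {
      if (!Number.isNaN(Number(p[1]))) {
        t = t + Number(p[1]);
      }
    }
  }
  return t;
}

// Higher-quality version: descriptive names, idiomatic array methods, and no
// nested decision points. Each record looks like "Engineering:1200".
function getAnnualExpenses(expenseRecords: string[], departmentName: string): number {
  return expenseRecords
    .map((record) => record.split(":"))
    .filter(
      ([department, amount]) =>
        department === departmentName && !Number.isNaN(Number(amount))
    )
    .reduce((total, [, amount]) => total + Number(amount), 0);
}

// Illustrative usage:
// getAnnualExpenses(["Engineering:1200", "Marketing:800", "Engineering:300"], "Engineering") === 1500
```

Both functions compute the same result, but the second can be read and verified at a glance.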

It’s important to note that while code quality includes a few factors that can be measured, every concrete measure of code quality is imperfect and easy to game. There is no product or tool that can automatically and definitively rate the quality of a codebase. However, there are heuristics you can use to get a feel for code quality and see where issues may lie.

Unit tests, for example, are pieces of code that test application behavior. They do so in a fast and repeatable way, ensuring that the application continues to work correctly as it grows and changes. Unit tests are more likely to accompany high-quality code than low-quality code. They reward and encourage quality, since the things that make for good code—encapsulation, loose coupling, conciseness, simplicity, and so on—also make for easier testing. This changes the path of least resistance for programmers: if they’re writing unit tests for previously untested code, sometimes it will be easier to simplify and rearrange that code than to test it in its current form. And if programmers know they will be writing unit tests as a standard part of the development process, they’re incentivized to write better code in the first place.
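
As a brief illustration, here is a minimal sketch of what such tests might look like, using Node’s built-in test runner (Node 18 or later; any test framework works the same way). The function under test is the hypothetical `getAnnualExpenses` from the earlier sketch, repeated here so the example stands alone.

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// The function under test: a small, self-contained unit that is easy to
// exercise in isolation. (Hypothetical; repeated so this file stands alone.)
function getAnnualExpenses(expenseRecords: string[], departmentName: string): number {
  return expenseRecords
    .map((record) => record.split(":"))
    .filter(
      ([department, amount]) =>
        department === departmentName && !Number.isNaN(Number(amount))
    )
    .reduce((total, [, amount]) => total + Number(amount), 0);
}

test("sums expenses for the requested department", () => {
  const records = ["Engineering:1200", "Marketing:800", "Engineering:300"];
  assert.equal(getAnnualExpenses(records, "Engineering"), 1500);
});

test("ignores malformed records instead of failing", () => {
  const records = ["Engineering:1200", "Engineering:not-a-number"];
  assert.equal(getAnnualExpenses(records, "Engineering"), 1200);
});

test("returns zero when the department has no expenses", () => {
  assert.equal(getAnnualExpenses([], "Engineering"), 0);
});
```

Because the function is small and self-contained, the tests need no setup beyond calling it; the same properties that make code high-quality also make it cheap to verify.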

The impacts of code quality

The nature of software applications is to grow, not just in size but in complexity. Complexity comes in many forms. Some of them are the result of low-quality code, but others are a fundamental part of the application’s functionality and value. Software complexity has business value for a few reasons:

  • The complexity of a problem space can be absorbed by software processes. This complexity is transformed, not destroyed. Software is never less complex than the problem it actually solves (although well-built software hides this fact).
  • A product can adapt to an ever-increasing variety of use cases by adding complexity (often during the deployment phase). This creates a larger market for the product by hiding or customizing features based on their relevance to specific industry segments, companies, or users.
  • Good software is built to high standards of accessibility, observability, resilience, and visual appeal. All of these things require additional code and therefore increased complexity.

Complexity that adds value or increases competitive advantage is “good complexity.” Good complexity is the reason we build software. Complexity that makes code difficult to understand, maintain, and build on is “bad complexity.” Bad complexity is a major reason why software products fail. Good complexity is part of the software’s purpose and design. Bad complexity is incidental, appearing when programmers or their managers make mistakes during implementation (I’ll discuss what causes this later on).

A useful way to define code quality is the ratio of bad complexity to good complexity. There are major advantages to keeping this ratio low—what I’ve been referring to as “high-quality code.”

First, it keeps development and maintenance costs low over the long term. Consider the following graph:

[Graph: cumulative functionality delivered over time. The curve for software with low internal quality rises quickly, then nearly levels off after an inflection point that arrives within weeks, not months. The curve for software with high internal quality starts lower, then rapidly outpaces the other after that point.]

Source: “Is High Quality Software Worth the Cost?” by Martin Fowler

For any project that lasts more than a few weeks, development speed is partially dependent on code quality. This is a significant factor in total cost of ownership—in other words, the company’s bottom line is directly at risk. One study found that complexity accounts for 25% of the cost of software maintenance and over 17% of total software lifecycle costs. And a 2018 poll by Stripe found the worldwide cost of dealing with “bad code” to be as high as $85 billion per year. These numbers only address costs in terms of development time and payroll; the costs of shipping defective software or being beaten to market by a competitor are also worth considering. Since the effects of bad code persist until it is removed, the only sensible approach is to deal with it early and often.

Second, code quality is a factor in job satisfaction among programmers. Companies that want to attract world-class talent, increase employee engagement, and limit turnover can’t afford to ignore this relationship. Turnover in particular is a well-studied factor in the success or failure of software projects, likely because programmers make more mistakes on projects they are less familiar with.

Third, code quality drives product quality. High-quality code is less likely to have bugs, which are a major driver of user complaints. Code quality has also been correlated with commercial success and reduced security vulnerabilities. Code quality certainly isn’t the only factor in a product’s performance on the market, and it likely isn’t the most important factor either. But even the most competent organizations will struggle to market a product that’s bug-ridden or vulnerable to hacks.

How organizations can increase code quality

Code quality requires investment. It can’t be increased without the commitment of resources—money, development time, and deadline negotiation being the most prominent of these. It also can’t be increased by the use of threats, punishment, or overtime; code can’t be improved under the conditions that lead to bad code in the first place! The good news is that refactoring—the practice of actively increasing the quality of a codebase—does not require all other development activities to cease. It can become a regular and minimally-disruptive part of the development process, something the team attends to while continuing to build and improve features.

Following are some organizational habits and competencies that promote higher-quality code.

Sustainable pace

Programming is difficult under the best of circumstances. As mentioned earlier, a programmer’s ability to code is profoundly dependent on their mental state. Happiness, sufficient rest, and low stress levels are all important ingredients.

The first way to ensure programmers have enough mental resources to do their best work is to avoid overtime. The quality of a programmer’s code drops by the end of a standard eight-hour workday, let alone in the hours beyond. Many programmers have had the experience of coding while excessively tired, stressed, or sick, then discovering the next day that all their work has to be undone or rewritten. This is “net negative work,” code so low-quality that it actually increases the amount of work remaining on the project. Nothing can compensate for a lack of rest over the long term—neither caffeine, nor micro-napping, nor ping-pong tables, nor free beer can replicate the benefits of a restful evening and a good night’s sleep.

Companies that foster burnout by insisting on constant overtime are not just abusive, they’re self-defeating. Any initial gains in productivity will be rapidly drowned out by poor quality. On the other hand, companies that keep their development teams to a short, sustainable, and flexible workday, allowing plenty of time off and sick leave, can expect consistent, high-quality output over the long term.

A common cause of overtime in otherwise competent organizations is deadline pressure. Deadlines are often based on financial motivations or scheduling concerns rather than the realities of software development. Programmers can’t speed up a project by thinking harder or typing faster. When a deadline approaches they generally take the only shortcut available to them, which is reduction of quality. This can be avoided by careful planning and strategizing around deadlines.

The first part of this is estimation. Software estimation is a complex skill that requires specific training. Most programmers and managers have not had this training. In the absence of estimation skills, teams are at risk of falling back on oversimplified formulas like “guess how long it will take, then double that guess, then double it again.” The best case scenario here is that the project is done early and project managers are left improvising ways to fill leftover development time. But the worst case scenario is that the project is delivered late or barely functional. So if an organization lacks the time or the will to build software estimation skills, it’s far better to negotiate for extremely generous deadlines than to end up disappointing customers.

The second part of deadline management is prioritization. Prioritization is not just about which things are built first; it’s about which things are built at all. Every development roadmap is liable to include features that are overvalued (difficult to build and not especially valuable to users) or undervalued (easy to build and very valuable to users). An effective manager will focus their team’s efforts on the latter end of the spectrum, especially as deadlines loom close.

If deadlines are managed well, they can be an effective motivator toward the team’s software delivery goals while also leaving enough time for things to be built right. Software quality, as much as anything else, requires dedicated time.

Refactoring as part of the development cycle

Technical debt, like a weed, springs up regardless of what we may do to prevent it. While some development practices can reduce it substantially, there is no way to eliminate it without the benefit of hindsight. Keeping it to a minimum means regularly taking time to find, discuss, and fix it.

Many organizations use the “20% rule” as a guide: 80% of development time is spent on feature development or bugfixes and 20% is spent on refactoring. This principle works best when applied loosely, since teams can’t predict exactly how long a given task will take. And there may be occasional cycles when technical debt has become such a hindrance that it requires a team’s full attention, or cycles when there are pressing feature releases to attend to and technical debt has to take a back seat. The organization should take care not to fall into the trap of doing either of these for longer than necessary.

For the best results, technical debt tasks should be first-class citizens of the planning process. That is, they should exist alongside features, bugs, and testing tasks in the team’s planning software (or on their whiteboard, as the case may be). The 20% rule can be adopted as simply as “one out of every five tasks is set aside for refactoring.” This doesn’t necessarily require technical debt tasks to be planned and scoped to the same level of detail as feature tasks. Rather, the goal is to ensure that the team’s refactoring work is visible and the time they spend on it is protected.

Technical leadership and review

Code quality is second nature to many programmers. They recognize low-quality code, understand its effects, and know how to fix it. To other programmers, the concept is entirely foreign. If a project is staffed entirely by the latter, it doesn’t take long for it to end up mired in slow development and technical issues.

What’s the difference between these two groups of programmers? The answer is simple: study and intentional practice. Unfortunately, neither colleges nor bootcamps nor years of industry experience can be relied on to instill the relevant skills—nothing on a programmer’s resume will necessarily set them apart. But this shouldn’t keep hiring managers up at night. Code quality and refactoring skills can be learned at any stage of a programmer’s career and are far less complex than many of the other concepts they deal with. The quality of a programmer’s work can increase dramatically over the course of a few months if they have access to the right resources and are willing to improve.

What’s more, a software organization can thrive even when their programmers are unevenly skilled. If practiced programmers are chosen to lead the architecture, design, and planning of software projects, their understanding of code quality will be part of the DNA of the development cycle. This kind of technical leadership is essential to high-quality software.

Skill gaps can also be bridged with various forms of review:

  • Design reviews happen after a task is specified but before any code is written. The programmer assigned to the task writes a document outlining their approach: the files, classes, and methods they plan to update; the patterns and techniques they will use; and any uncertainties they may have about design or business logic. Then another programmer gives feedback on that document, which may include correcting misconceptions and suggesting helpful patterns or methods that already exist in the codebase. The time taken by this process is sometimes substantial, but it’s a worthwhile investment. Catching a defect this early in the development process is extremely cheap relative to the cost of discovering and fixing it later.
  • Pair programming is the practice of assigning two programmers to complete a task together in real time. One programmer controls the keyboard and mouse while the other programmer gives directions. This allows for instantaneous review, feedback, and knowledge transfer during development. Studies have found mixed results as to the raw productivity of two programmers working together versus solo (although it seems to offer a more consistent productivity boost to inexperienced programmers). However, when code quality is taken into consideration, pair programming is almost universally recognized as advantageous.
  • Code reviews happen after code is written but before it’s shipped as part of the product. A programmer makes their code changes available to the rest of the team. Then another programmer reads them and gives feedback, often suggesting changes that should be made before the code is shipped. Many code quality issues can only be recognized by a human. This is the last opportunity to catch those before they become part of the product.

If team members are frequently required to check each other’s work and there are no major hostilities or communication failures between them, the quality of their collective work will rise toward the level of the most practiced programmer’s.

Code quality is a competitive advantage

Research on code quality is not scarce (if the 25+ studies linked above aren’t enough, there are plenty more). Time and time again it’s been demonstrated to be a factor in project success or failure, time-to-market, and product longevity. Organizations that prioritize healthy codebases are prioritizing their customers, their programmers, and their own financial viability.

Code quality is also one of the great gifts we programmers can give to ourselves and our colleagues. If we spend a few extra moments naming a variable, rewriting deeply-nested logic, or making a function more predictable, those moments will be paid back in full many times over as we interact with that code over the life of the project. Thinking about how our code affects computers is the bare minimum; thinking about how our code affects humans is a powerful form of empathy in practice, one that no organization can afford to ignore.
