Simplicity is key to successful software

Michael Seifert, 2020-05-11, Updated 2020-10-01

For starters, let me explain why I consider simplicity a key factor in successful software projects. As in my previous posts about long-term software stabilization and the dangers of neglecting software maintenance, I base my argument on Lehman's laws of software evolution:

V. Conservation of Familiarity (Perceived Complexity)

During the active life of a program the release content (changes, additions, deletions) of the successive releases of an evolving program is statistically invariant.

— Lehman (1980), Programs, Life Cycles, and Laws of Software Evolution

As a software system grows, it becomes more and more complex. All people associated with the software need to understand its interactions and features, what it can do and what it cannot. This process involves a certain cognitive complexity that limits the growth rate of the software.[1]

IV. Conservation of Organizational Stability (Invariant Work Rate)

During the active life of a program the global activity rate in a programming project is statistically invariant.

— Lehman (1980), Programs, Life Cycles, and Laws of Software Evolution

Software projects usually start with a small team. As development slows down, or the software becomes more important, companies tend to assign more staff to the project. However, Lehman's fourth law explicitly states that we cannot expect an increase in development speed. This seems logical, as the added staff also needs to fully understand the system, per the law of Conservation of Familiarity.

It seems we are at an impasse. The output of a product team is bound by the cognitive complexity of the software system, and adding more people to the team does not change that output. What, then, does increase productivity?

Simplicity does. Since we cannot increase the cognitive capacity of the product team, the only way to increase their productivity is to reduce the cognitive load. There are in fact several ways to achieve this, because complexity can be addressed at all levels.

The most effective measure is to reduce the complexity of the business process that the software is meant to support. This addresses the heart of the matter, because the software cannot be simpler than the business process itself.

The constraints of the business process need to be considered by the software architecture. Good architectures prefer simple over complex solutions. In other words, choose a monolithic design over a distributed system, if possible.

The architectural design is then implemented by the development team. They benefit greatly from automation, because it supports them in two ways. For one, automated processes such as build pipelines eliminate the repeated execution of the same manual steps, which saves time in the long run. For another, automation pipelines also represent a process, for example rolling out a software release. The pipeline thus serves as an abstraction for a set of tasks, and the individual steps no longer have to be considered one by one.
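To illustrate the abstraction argument, here is a minimal sketch in Python. The step names (`run_tests`, `build_artifact`, `deploy`) are hypothetical stand-ins for whatever a real release actually involves; the point is only that the pipeline bundles them behind a single name, so nobody has to remember or reorder the steps themselves.

```python
# Hypothetical release steps; each returns a short status message.
def run_tests() -> str:
    return "tests passed"

def build_artifact() -> str:
    return "artifact built"

def deploy() -> str:
    return "deployed"

# The pipeline is the abstraction: one ordered list of tasks.
RELEASE_STEPS = [run_tests, build_artifact, deploy]

def release() -> list[str]:
    """Run every step in order; callers see a single operation."""
    return [step() for step in RELEASE_STEPS]

print(release())
```

Whoever triggers a release now calls `release()` and reasons about one operation instead of three, which is exactly the reduction in cognitive load the text describes.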

These are just examples of how complexity can be managed. Generally, complexity needs to be managed at all levels of a software project, but the measures are more effective the "further up" they are applied. The key takeaway is this: the simpler a software system is designed, the easier it is to understand. Software that is easy to understand requires less effort to evolve and adjust, which ultimately results in significantly reduced maintenance costs.


  1. Lehman et al. (1997), Metrics and Laws of Software Evolution - The Nineties View ↩