Let's do a thought experiment about change over time. We will start with a "clean" codebase, organized into a bunch of modules. These can be classes, subsystems, namespaces… it doesn't matter for our purposes.
As time goes on, we have to make changes in the code. Those changes are not spread evenly across the codebase: some modules get changed far more often than others. Empirically, the distribution of changes per module almost always follows a power law, with a few modules changed very frequently and a long tail of modules touched less and less often.
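You can check this against your own history. The sketch below is just one way to do it, assuming a git repository and Python 3: it counts how many commits touched each file. In most codebases, a small fraction of the files accounts for most of the changes.

```python
# Count how many commits touched each file in a git repository,
# then show the hot spots and the long tail.
# Run it from the root of a git checkout.
import subprocess
from collections import Counter

log = subprocess.run(
    ["git", "log", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout

# Each non-blank line of this log output is a file path touched by some commit.
changes = Counter(line for line in log.splitlines() if line.strip())
ranked = changes.most_common()

print("Hot spots:", ranked[:5])
print("Long tail:", ranked[-5:])
top_tenth = ranked[:max(1, len(ranked) // 10)]
share = sum(count for _, count in top_tenth) / sum(changes.values())
print(f"Top 10% of files account for {share:.0%} of all changes")
```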
(In fact, long-tailed distributions resembling a power law are very common in software. We see them in object lifespans, coupling, and the distribution of complexity across modules.)
We started with a clean codebase, but what does that mean? To me, it means that the code is simple, cohesive, and loosely coupled. In short, it is easy to change, and most changes are local.
With every change a developer makes, the module might lose cohesion, gain coupling, or become complex. These are entropy-increasing changes. On the other hand, the developer could split an incohesive module, decouple a coupled one, or factor out complexity. These are entropy-reducing changes.
Even with a very disciplined team, some proportion of changes will be entropy-increasing. We would all like to avoid them, but they happen anyway. I think of these as introducing stress into the codebase. The stress is trying to collapse the codebase into a Ball of Mud. We must eventually relieve that stress through refactoring. At every change, there is a chance that the module transitions from clean to dirty, or from dirty to clean.
If the odds of a clean-to-dirty transition and a dirty-to-clean transition were equal, we would expect the codebase to reach a kind of thermodynamic equilibrium, where dirtiness is conserved but shuttled around. Sadly, the odds are not equal. Clean code is, by definition, easy to change; dirty code is hard to change. We all intuitively understand that it is much harder to clean up a module than to dirty it in the first place. So the likelihood of a clean-to-dirty transition is higher than that of the reverse, and from this alone we should expect the fraction of dirty modules to increase over time. This is entropic decay.
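To make that concrete, treat each module as a two-state Markov chain. The probabilities in this sketch are invented for illustration; the only thing that matters is that dirtying is more likely than cleaning.

```python
# A module as a two-state Markov chain: each change either dirties a clean
# module (probability p) or cleans a dirty one (probability q).
# The numbers are illustrative; the point is that p > q.
p = 0.20   # chance a change flips a clean module to dirty
q = 0.05   # chance a change flips a dirty module back to clean

# In the long run, flips in each direction must balance:
#   (fraction clean) * p == (fraction dirty) * q
# so the steady-state fraction of dirty modules is p / (p + q).
dirty_fraction = p / (p + q)
print(f"Long-run dirty fraction: {dirty_fraction:.0%}")   # 80%

# If the odds were equal (p == q), the codebase would settle at 50% dirty --
# the equilibrium described above. The asymmetry pushes it toward all-dirty.
```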
There's a secondary effect at play that exacerbates this problem. Because it is easier to change a clean module than a dirty one, we will see some kinds of changes "sliding sideways" from a dirty module to a cleaner one. This is just another way of saying that kludges cluster. A developer who needs to make a difficult change to a dirty module is very likely to make a nearby change instead, especially under deadline pressure. That nearby change inevitably dirties the module it lands in, often by adding coupling.
If you picture the modules in a codebase as cells in a Game of Life–style automaton, you could watch cells fading from healthy blue to sickly red, then transmitting their disease to their neighbors. Very occasionally, you'll see a cell or a cluster of cells brightening back to health as a developer restructures that area. Mostly, though, the codebase will decay until it must be discarded or rewritten.
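Here is a toy version of that picture, assuming a square grid of modules and invented parameters for how often changes slide sideways and how often anyone refactors. It illustrates the dynamic; it is not a model of any real codebase.

```python
# Toy simulation of the decay dynamic: a grid of modules, each with a
# "dirtiness" score. Changes land at random, but tend to slide from dirty
# modules to cleaner neighbors; most changes add a little dirt, and an
# occasional refactoring removes a lot. All parameters are made up.
import random

SIZE = 10                # 10x10 grid of modules
P_REFACTOR = 0.10        # fraction of changes that are entropy-reducing
P_SLIDE = 0.5            # chance a change to a dirty module slides sideways
DIRTY = 0.5              # dirtiness threshold

grid = [[0.0] * SIZE for _ in range(SIZE)]

def neighbors(r, c):
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < SIZE and 0 <= nc < SIZE:
            yield nr, nc

def apply_change(r, c):
    # Changes aimed at a dirty module tend to land in a cleaner neighbor.
    if grid[r][c] > DIRTY and random.random() < P_SLIDE:
        r, c = min(neighbors(r, c), key=lambda rc: grid[rc[0]][rc[1]])
    if random.random() < P_REFACTOR:
        grid[r][c] = max(0.0, grid[r][c] - 0.5)   # occasional cleanup
    else:
        grid[r][c] = min(1.0, grid[r][c] + 0.1)   # the usual small dirtying

for _ in range(5000):
    apply_change(random.randrange(SIZE), random.randrange(SIZE))

dirty = sum(cell > DIRTY for row in grid for cell in row)
print(f"{dirty}/{SIZE * SIZE} modules are dirty after 5000 changes")
```

With the dirtying bias and the sideways slide both in place, the grid ends up mostly dirty no matter where the changes start.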
Can this entropic decay be prevented?
Yes, but it is not easy. We know of several ways to avoid it.
1. Surgical anti-entropy. Have people consistently tackle the gnarliest, smelliest parts of the system and bring them back to health.
Some applications live for decades. These require strong anti-entropy measures. Others live for days or weeks. Dirty code is more expensive to maintain, but with such a short lifespan, who cares if they get dirty? Whatever you choose, think about the expected lifespan of the codebase and be deliberate about your approach to entropy.