The Rise of Worse is Better

Lisp: Good News, Bad News, How to Win Big (Richard Gabriel, EuroPal 1990)

While John Hughes was lamenting that the world at large didn’t understand the benefits of functional programming, Richard Gabriel was considering the reasons for the difficulties within the Lisp community: “Lisp has done quite well over the last ten years… yet the Lisp community has failed to do as well as it could have.” He presented his ideas at the EuroPal conference in 1990 (Gabriel gives the backstory on his dreamsongs site), immediately sparking great controversy and debate. The part of the paper you want to zoom in on is section 2.1, “The Rise of Worse is Better.”

The key problem with Lisp today stems from the tension between two opposing software philosophies. The two philosophies are called The Right Thing and Worse is Better.

Common Lisp and CLOS (the Common Lisp Object System) were born of the MIT/Stanford school of design, captured by the philosophy of ‘doing the right thing.’ This school values simplicity, correctness, consistency, and completeness.

  • Simplicity: the design must be simple, both in implementation and interface. It is more important for the interface to be simple than the implementation.
  • Correctness: the design must be correct in all observable aspects. Incorrectness is simply not allowed.
  • Consistency: the design must not be inconsistent. A design is allowed to be slightly less simple and less complete to avoid inconsistency. Consistency is as important as correctness.
  • Completeness: the design must cover as many important situations as is practical. All reasonably expected cases must be covered. Simplicity is not allowed to overly reduce completeness.

It’s interesting to pause for a moment here and consider where you stand on these four values. My personal biases tend to align with the first three statements, but I’d be more willing to trade off completeness for simplicity. What about you?

There is another school of design described by Gabriel as the “New Jersey approach”, and favoured by the creators of C and Unix. If the worse-is-better advocates got together to write a manifesto, it might look like this:

Through our work we have come to value:

  • Simplicity of implementation over simplicity of interface
  • Simplicity of design over absolute correctness. It is slightly better to be simple than correct.
  • Pragmatic inconsistency (especially of interfaces) over implementation complexity.
  • Simplicity over completeness. Completeness must be sacrificed whenever implementation simplicity is jeopardized.

That is, while there is value in the items on the right, we value the items on the left more.

I have intentionally caricatured the worse-is-better philosophy to convince you that it is obviously a bad philosophy and that the New Jersey approach is a bad approach. However, I believe that worse-is-better, even in its straw man form, has better survival characteristics than the-right-thing, and that the New Jersey approach when used for software is a better approach than the MIT approach.

The worse-is-better approach tends to get simpler software into users’ hands earlier, and because it is easier to port, it can start to appear everywhere. “Unix and C are the ultimate computer viruses.”

It is important to remember that the initial virus has to be basically good. If so, the viral spread is assured as long as it is portable. Once the virus has spread, there will be pressure to improve it, possibly by increasing its functionality closer to 90%, but users have already been conditioned to accept worse than the right thing. Therefore, the worse-is-better software first will gain acceptance, second will condition its users to expect less, and third will be improved to a point that is almost the right thing.

That doesn’t sound far off many of the popular approaches in use today.

What about the-right-thing approach? It tends to lead to one of two basic scenarios: the big complex system, or the diamond-like jewel.

The big complex system scenario (e.g. Common Lisp):

First, the right thing needs to be designed. Then its implementation needs to be designed. Finally it is implemented. Because it is the right thing, it has nearly 100% of desired functionality, and implementation simplicity was never a concern so it takes a long time to implement. It is large and complex. It requires complex tools to use properly. The last 20% takes 80% of the effort, and so the right thing takes a long time to get out, and it only runs satisfactorily on the most sophisticated hardware.

The diamond-like jewel scenario (e.g. Scheme):

The right thing takes forever to design, but it is quite small at every point along the way. To implement it to run fast is either impossible or beyond the capabilities of most implementers.

What lesson is to be drawn from this?

… It is often undesirable to go for the right thing first. It is better to get half of the right thing available so that it spreads like a virus. Once people are hooked on it, take the time to improve it to 90% of the right thing.

As catalogued in his “Worse is Better” blog post, Richard Gabriel was to go backwards and forwards on this issue over the coming years, arguing alternately for worse-is-better, and for the-right-thing.

In 2000 an OOPSLA panel was convened to debate the question of whether worse was still better. Gabriel wrote a position paper arguing for the-right-thing. A month later, he wrote a second position paper arguing for worse-is-better!

Is worse still better? Does a speedily released imperfect approximation, refined through user interaction, beat the-right-thing? The marketplace often seems to say yes. But what are we sacrificing in pursuing this route? Is there still a role for the-right-thing and the pursuit of ‘perfection’? How should we think about the trade-offs between simplicity, correctness, consistency, and completeness? More on this tomorrow…

5 thoughts on “The Rise of Worse is Better”

  1. I’ve been reading through Gabriel’s “Patterns of Software” from two decades ago, and I find that some of his material is spot-on, and some of it is the wishful musings of that generation of computer scientist that doesn’t really understand that CS has to stop expecting to be able to meet capitalism on CS’s terms. Markets (customers, users, and everything in between) don’t care what you write code in, so long as the code works and runs well (where “well” means “it doesn’t crash too often, it doesn’t lose too much data, and it doesn’t take too long”). Gabriel’s blind spot around some of this is echoed by similar kinds of things I’ve heard others (particularly linguists) say: standardization trumps its lack, this or that effort was what derailed the whole thing, and journalists with ignorant opinions somehow convinced the world that language “X” was doomed to fail. We’ve seen all of these things happen over and over again, and either linguists are totally helpless in the face of these obstacles, or those aren’t really the obstacles that are causing languages/platforms/environments to fail.

    Gabriel is usually good reading, but I find I take his analyses with large grains of salt when it comes to the things outside of the core areas of CS.

  2. @Ted: That’s probably smart in general, too. Cf. physicists opining about demography, etc.

    This is a great read. I assume that no small part of Gabriel’s, and this whole space’s, waffling between design philosophies (e.g., Waterfall vs. Agile, “Premature optimization is the root of all evil” (noting that that’s taken out of some helpful context)) is due to the pace at which goals and targets can be changed, thanks largely to technological enablement. Doing The Right Thing takes enough time that customers can change their minds about what are important features. Worse-Is-Better will still always be chasing that changing target.

    How do these approaches vary as regards technical debt, and of future-resistance (as opposed to future-proofing)? Worse-Is-Better sounds very nimble but also short-sighted, giving rise to a higher probability of architectural frailty in a short time. The Right Thing sounds more architecturally invested but perhaps to a fault.

  3. One problem is that the definition of ‘better’ changes over time. Take the Windows OS – once upon a time security was not very important; the worst thing that could happen was that it crashed the system (which happened quite often) – and spending too much effort on security would have prevented them from cramming in another feature to please the customer. Then came the internet, and security suddenly became very important!
