Lisp: Good News, Bad News, How to Win Big – Richard Gabriel, EuroPal 1990
While John Hughes was lamenting that the world at large didn’t understand the benefits of functional programming, Richard Gabriel was considering the reasons for the difficulties within the Lisp community: “Lisp has done quite well over the last ten years… yet the Lisp community has failed to do as well as it could have.” He presented his ideas at the EuroPal conference in 1990 (Gabriel gives the back story on his dreamsongs site) and immediately sparked great controversy and debate. The part of the paper you want to zoom in on is section 2.1, “The Rise of Worse is Better.”
The key problem with Lisp today stems from the tension between two opposing software philosophies. The two philosophies are called The Right Thing and Worse is Better.
Common Lisp and CLOS (the Common Lisp Object System) were born of the MIT/Stanford school of design, captured by the philosophy of 'doing the right thing.' This school values simplicity, correctness, consistency, and completeness.
- Simplicity: the design must be simple, both in implementation and interface. It is more important for the interface to be simple than the implementation.
- Correctness: the design must be correct in all observable aspects. Incorrectness is simply not allowed.
- Consistency: the design must not be inconsistent. A design is allowed to be slightly less simple and less complete to avoid inconsistency. Consistency is as important as correctness.
- Completeness: the design must cover as many important situations as is practical. All reasonably expected cases must be covered. Simplicity is not allowed to overly reduce completeness.
It’s interesting to pause for a moment here and consider where you stand on these four values. My personal biases tend to align with the first three statements, but I’d be more willing to trade off completeness for simplicity. What about you?
There is another school of design described by Gabriel as the “New Jersey approach”, and favoured by the creators of C and Unix. If the worse-is-better advocates got together to write a manifesto, it might look like this:
Through our work we have come to value:
- Simplicity of implementation over simplicity of interface
- Simplicity of design over absolute correctness. It is slightly better to be simple than correct.
- Pragmatic inconsistency (especially of interfaces) over implementation complexity.
- Simplicity over completeness. Completeness must be sacrificed whenever implementation simplicity is jeopardized.
That is, while there is value in the items on the right, we value the items on the left more.
I have intentionally caricatured the worse-is-better philosophy to convince you that it is obviously a bad philosophy and that the New Jersey approach is a bad approach. However, I believe that worse-is-better, even in its straw man form, has better survival characteristics than the-right-thing, and that the New Jersey approach when used for software is a better approach than the MIT approach.
The worse-is-better approach tends to get simpler software into users’ hands earlier, and because it is easier to port, it can start to appear everywhere. “Unix and C are the ultimate computer viruses.”
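Gabriel’s paper makes the implementation-versus-interface trade concrete with his famous “PC loser-ing” story: when a Unix system call is interrupted by a signal, the kernel simply aborts it and returns an error (EINTR), leaving the retry to the caller. The implementation stays simple; the interface pushes complexity onto every user program. Here’s a minimal C sketch of the retry idiom this design forces on callers (the wrapper name `read_retry` is mine, but the EINTR behaviour is standard POSIX):

```c
#include <errno.h>
#include <unistd.h>

/* Worse-is-better in practice: the Unix kernel keeps its
 * implementation simple by returning EINTR when a signal
 * interrupts a system call, so every robust caller ends up
 * carrying a retry loop like this one. */
ssize_t read_retry(int fd, void *buf, size_t count) {
    ssize_t n;
    do {
        n = read(fd, buf, count);
    } while (n == -1 && errno == EINTR);
    return n;
}
```

The right-thing design, by contrast, would have the kernel back out of the interrupted call and transparently resume it, sparing every caller this boilerplate at the cost of a considerably more complex implementation. That is exactly the trade the New Jersey approach declines to make.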
It is important to remember that the initial virus has to be basically good. If so, the viral spread is assured as long as it is portable. Once the virus has spread, there will be pressure to improve it, possibly by increasing its functionality closer to 90%, but users have already been conditioned to accept worse than the right thing. Therefore, the worse-is-better software first will gain acceptance, second will condition its users to expect less, and third will be improved to a point that is almost the right thing.
That doesn’t sound far off many of the popular approaches in use today.
What about the-right-thing approach? It tends to lead to one of two basic scenarios: the big complex system, or the diamond-like jewel.
The big complex system scenario (e.g. Common Lisp):
First, the right thing needs to be designed. Then its implementation needs to be designed. Finally it is implemented. Because it is the right thing, it has nearly 100% of desired functionality, and implementation simplicity was never a concern so it takes a long time to implement. It is large and complex. It requires complex tools to use properly. The last 20% takes 80% of the effort, and so the right thing takes a long time to get out, and it only runs satisfactorily on the most sophisticated hardware.
The diamond-like jewel scenario (e.g. Scheme):
The right thing takes forever to design, but it is quite small at every point along the way. To implement it to run fast is either impossible or beyond the capabilities of most implementers.
What lesson is to be drawn from this?
… It is often undesirable to go for the right thing first. It is better to get half of the right thing available so that it spreads like a virus. Once people are hooked on it, take the time to improve it to 90% of the right thing.
As catalogued in his “Worse is Better” blog post, Richard Gabriel was to go backwards and forwards on this issue over the coming years, arguing alternately for worse-is-better and for the-right-thing.
In 2000 an OOPSLA panel was convened to debate the question of whether worse was still better. Gabriel wrote a position paper arguing for the-right-thing. A month later, he wrote a second position paper arguing for worse-is-better!
Is worse still better? Does a speedily released imperfect approximation, refined through user interaction, beat the-right-thing? The marketplace often seems to say yes. But what do we sacrifice in pursuing this route? Is there still a role for the-right-thing and the pursuit of ‘perfection’? How should we think about the trade-offs between simplicity, correctness, consistency, and completeness? More on this tomorrow…