# Why Functional Programming Matters

*Why Functional Programming Matters*, John Hughes, in *Research Topics in Functional Programming*, 1990 (based on an earlier paper that appeared in *The Computer Journal* in 1989).

1989/1990 must have been a fairly dark time to be a functional programmer. Object-oriented programming was rising in prominence and the dream that industry would pay attention to functional programming looked like it was getting further away, not closer. What a time then to publish “Why Functional Programming Matters,” John Hughes’ powerful argument for why the mainstream might be making a mistake in ignoring functional programming. You can almost hear the lone voice in the corner saying “hey, over here!”

This paper is an attempt to demonstrate to the larger community (of non-functional) programmers the significance of functional programming, and also to help functional programmers exploit its advantages to the full by making it clear what those advantages are.

The version of the paper I’m referencing runs to 23 pages, but the bulk of that is examples illustrating the author’s points. The central argument itself is fairly concise and most gloriously argued, and, tying in with a theme we’ve been bouncing around over the last week or so, it all comes back to modularity. You’ll have to forgive me for quoting extensively from the opening sections of the paper today; Hughes’ prose is just too good not to!

It is now generally accepted that modular design is the key to successful programming… However, there is a very important point that is often missed. When writing a modular program to solve a problem, one first divides the problem into subproblems, then solves the subproblems, and finally combines the solutions. The ways in which one can divide up the original problem depend directly on the ways in which one can glue solutions together. Therefore, to increase one’s ability to modularize a problem conceptually, one must provide new kinds of glue in the programming language.

But we’re getting ahead of ourselves… When most people sum up the advantages of functional programming, argues Hughes, they talk about the fact that functional programs are side-effect free and contain no assignment statements. Expressions can therefore be evaluated at any time and replaced by their values, and vice-versa: programs are “referentially transparent.” People still talk a lot about the value of immutability and side-effect-free computing today, and of course these things have value. Yet, says Hughes, it’s not a good way to explain FP to non-believers:

Such a catalogue of “advantages” is all very well, but one must not be surprised if outsiders don’t take it too seriously. It says a lot about what functional programming isn’t (it has no assignment, no side effects, no flow of control) but not much about what it is. The functional programmer sounds rather like a medieval monk, denying himself the pleasures of life in the hope that it will make him virtuous. To those more interested in material benefits, these “advantages” are totally unconvincing.

Oh, but there are benefits, says the functional programmer: functional programs are an order of magnitude shorter, and therefore functional programmers are more productive.

Yet why should this be? The only faintly plausible reason one can suggest on the basis of these “advantages” is that conventional programs consist of 90% assignment statements, and in functional programs these can be omitted. This is plainly ridiculous. If omitting assignment statements brought such enormous benefits then FORTRAN programmers would have been doing it for twenty years.

If this characterization of functional programming is inadequate, what is it that can both explain the power of functional programming and give a clear indication of what the functional programmer should strive towards? Think back to the time when structured programming was new (you remember that, right?): the advantages, says Hughes, were summed up more or less as “structured programs contain no goto statements.” This is similar to the negative ‘advantages’ advocated for functional programming.

With the benefit of hindsight, it’s clear that these properties of structured programs, although helpful, do not get to the heart of the matter. The most important difference between structured and unstructured programs is that structured programs are designed in a modular way. Modular design brings with it great productivity improvements:

• Small modules can be coded quickly and easily.
• General-purpose modules can be reused, leading to faster development of subsequent programs.
• The modules of a program can be tested independently, helping to reduce the time spent debugging.

The absence of gotos helps with programming ‘in the small’, but modular design helps with programming ‘in the large.’ And now we’re back at the opening quote: “The ways in which one can divide up the original problem depend directly on the ways in which one can glue solutions together.”

We shall argue in the remainder of this paper that functional languages provide two new, very important kinds of glue… This is the key to functional programming’s power – it allows improved modularization. It is also the goal for which functional programmers must strive – smaller and simpler and more general modules, glued together with the new glues we shall describe.

The two new glues are:

• Higher-order functions, and
• Lazy evaluation

Higher-order functions enable simple functions to be glued together to make more complex ones. I’m going to assume most readers are familiar with this idea. The example given in the paper is foldr, which captures a common computation pattern over lists, such that we can write, for example:

```
sum       = foldr (+) 0
product   = foldr (*) 1
anytrue   = foldr (||) False
alltrue   = foldr (&&) True
length    = foldr count 0    -- where count a n = n + 1
map f     = foldr (Cons . f) Nil
summatrix = sum . map sum
```

And many more examples (the programming language used in the paper is Miranda).

These examples should be enough to convince the reader that a little modularization can go a long way. By modularizing a simple function (sum) as a combination of a higher-order function and some simple arguments, we have arrived at a part (foldr) that can be used to write many other functions on lists with no more programming effort.
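To make the pattern concrete, here is a minimal sketch in Haskell (a natural modern stand-in for the paper’s Miranda; the primed names are mine, chosen to avoid clashing with the Prelude). foldr replaces each cons cell in the list with `f` and the empty list with `z`, and the derived functions fall out as one-liners:

```haskell
-- foldr replaces (:) with f and [] with z throughout the list.
myFoldr :: (a -> b -> b) -> b -> [a] -> b
myFoldr _ z []     = z
myFoldr f z (x:xs) = f x (myFoldr f z xs)

-- Specializing the general part recovers the familiar functions.
sum' :: Num a => [a] -> a
sum' = myFoldr (+) 0

anyTrue :: [Bool] -> Bool
anyTrue = myFoldr (||) False

length' :: [a] -> Int
length' = myFoldr (\_ n -> n + 1) 0
```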

This doesn’t just apply to lists, of course; you can build higher-order functions for working with all kinds of data structures.
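The paper demonstrates this with labelled ordered trees. As a rough Haskell sketch of the same idea (the datatype and names here are mine, not the paper’s): one higher-order function captures the recursion pattern over a binary tree, and specific operations become one-liners.

```haskell
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- foldTree replaces each Node with f and each Leaf with z,
-- just as foldr replaces cons and nil.
foldTree :: (b -> a -> b -> b) -> b -> Tree a -> b
foldTree _ z Leaf         = z
foldTree f z (Node l x r) = f (foldTree f z l) x (foldTree f z r)

sumTree :: Num a => Tree a -> a
sumTree = foldTree (\l x r -> l + x + r) 0

depth :: Tree a -> Int
depth = foldTree (\l _ r -> 1 + max l r) 0
```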

All this can be achieved because functional languages allow functions that are indivisible in conventional programming languages to be expressed as a combination of parts — a general higher-order function and some particular specializing functions. Once defined, such higher-order functions allow many operations to be programmed very easily. Whenever a new datatype is defined, higher-order functions should be written for processing it. This makes manipulating the datatype easy, and it also localizes knowledge about the details of its representation.

Lazy evaluation requires a little more thinking to see why Hughes classes it as a modularization mechanism: lazy evaluation makes it practical to modularize a program as a generator that constructs a large number of possible answers, and a selector that chooses the appropriate one. Without lazy evaluation this would not always be practical (or even possible, in the case of infinite generators).
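The paper’s Newton-Raphson square-root example shows this split nicely. In this Haskell transcription (the overall generator/selector shape follows the paper, but the details and names here are my own), `iterate` lazily produces an infinite list of ever-better approximations, and `within` selects the first one that has converged:

```haskell
-- One Newton-Raphson step for the square root of n.
next :: Double -> Double -> Double
next n x = (x + n / x) / 2

-- Selector: walk the (possibly infinite) list of approximations
-- until two successive values agree to within eps.
within :: Double -> [Double] -> Double
within eps (a : b : rest)
  | abs (a - b) <= eps = b
  | otherwise          = within eps (b : rest)
within _ _ = error "need at least two approximations"

-- Generator and selector glued together. Only laziness makes the
-- infinite list of approximations a practical program structure.
squareRoot :: Double -> Double -> Double -> Double
squareRoot guess eps n = within eps (iterate (next n) guess)
```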

We have described lazy evaluation in the context of functional languages, but surely so useful a feature should be added to non-functional languages – or should it? Can lazy evaluation and side-effects coexist? Unfortunately they cannot: adding lazy evaluation to an imperative notation is not actually impossible, but the combination would make the programmer’s life harder rather than easier.

Something to think about for those languages that like to blend functional and non-functional constructs…

Why exactly does it make the programmer’s life harder though?

Because lazy evaluation’s power depends on the programmer giving up any direct control over the order in which the parts of a program are executed, it would make programming with side effects rather difficult, because predicting in what order – or even whether – they might take place would require knowing a lot about the context in which they are embedded. Such global interdependence would defeat the very modularity that – in functional languages – lazy evaluation is designed to enhance.
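A tiny Haskell illustration of the problem (my example, not the paper’s), using GHC’s Debug.Trace to smuggle an observable effect into a pure expression. Which traces fire, and when, is decided by demand rather than by the order of the source text:

```haskell
import Debug.Trace (trace)

-- A lazily-built list: each element announces itself when forced.
squares :: [Int]
squares = map (\i -> trace ("computing " ++ show i) (i * i)) [1 .. 5]

-- Demanding only the third element fires only that element's trace;
-- the other "effects" may never happen at all. Predicting effect
-- order under laziness means knowing every context of demand.
```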

There follows in the paper a series of examples demonstrating the power of lazy evaluation and higher-order functions: Newton-Raphson square roots; numerical differentiation and integration; and an alpha-beta heuristic for evaluating game trees. In all cases it’s impressive how quickly very powerful and expressive levels of abstraction are reached, and they are well worth studying.