The Design Philosophy of the DARPA Internet Protocols

The Design Philosophy of the DARPA Internet Protocols – Clark 1988

While there have been papers and specifications that describe how the (internet) protocols work, it is sometimes difficult to deduce from these why the protocol is as it is. For example, the Internet protocol is based on a connectionless or datagram mode of service. The motivation for this has been greatly misunderstood. This paper attempts to capture some of the early reasoning which shaped the Internet protocols.

Understanding the underlying principles behind something can turn what on the surface seems to be simply a collection of facts into a chain of causes and consequences, making it much easier to see how the parts fit together. Clark provides us with some of those insights for the design of the Internet Protocols, working from the goals towards the implementation consequences.

The top level goal for the DARPA Internet Architecture was to develop an effective technique for multiplexed utilization of existing interconnected networks.

This implied integrating networks spanning different administrative boundaries of control. And since the initial networks to be connected used packet switching, packet switching was adopted as a fundamental component of the internet architecture. From the ARPANET project, the technique of store-and-forward packet switching for interconnecting networks was well understood.

From these assumptions comes the fundamental structure of the Internet: a packet switched communications facility in which a number of distinguishable networks are connected together using packet communications processors called gateways which implement a store and forward packet forwarding algorithm.
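To make that concrete, here is a minimal sketch (my own, not from the paper) of the store-and-forward idea: a gateway receives a whole packet, looks up the destination network in its routing table, and passes the packet on to the next hop. The `Datagram` fields and the dotted `network.host` addressing are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Datagram:
    src: str       # illustrative "network.host" addresses, e.g. "net1.hostA"
    dst: str       # e.g. "net3.hostB"
    payload: bytes

class Gateway:
    """A packet communications processor: forwards whole packets, one at a time."""

    def __init__(self, routes):
        # routes maps a destination network name to the next hop
        # (a neighbouring gateway, or the destination network itself)
        self.routes = routes

    def forward(self, datagram: Datagram):
        dst_network = datagram.dst.split(".", 1)[0]
        next_hop = self.routes.get(dst_network)
        if next_hop is None:
            return None            # best effort: an undeliverable packet is simply dropped
        # "store and forward": the whole datagram has been received and is now
        # handed on towards its destination; no per-connection state is kept here.
        return next_hop, datagram
```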

Design of the Internet

The top level goal says ‘what’ is to be achieved, but says very little about the desired characteristics of a system that accomplishes it. There were seven second-level goals, which are presented below in priority order.

  1. Internet communication must continue despite loss of networks or gateways.
  2. The Internet must support multiple types of communications service.
  3. The Internet architecture must accommodate a variety of networks.
  4. The Internet architecture must permit distributed management of its resources.
  5. The Internet architecture must be cost effective.
  6. The Internet architecture must permit host attachment with a low level of effort.
  7. The resources used in the internet architecture must be accountable.

These goals are in order of importance, and an entirely different network architecture would result if the order were changed. For example, since this network was designed to operate in a military context, which implied the possibility of a hostile environment, survivability was put as a first goal, and accountability as a last goal.

It turns out that the top three goals on the list had the most influence on the resulting design. See the full paper (link at the top) for reflections on the remaining four.

Surviving in the face of failure

If two entities are communicating over the Internet, and some failure causes the Internet to be temporarily disrupted and reconfigured to reconstitute the service, then the entities communicating should be able to continue without having to reestablish or reset the high level state of their conversation.

The only error the communicating parties should ever see is the case of total partition. If the application(s) on either end of the connection are not required to resolve any other failures, then the state necessary for recovery must be held in the lower layers – but where? One option is to put it in the intermediate nodes in the network, and of course to protect it from loss it must be replicated. I think the knee-jerk reaction of many system designers today might be to distribute the state in some such manner, maybe using a gossip protocol. But the original designers of the internet had an insight which enabled a much simpler solution, and they called it ‘fate sharing.’

The alternative, which this architecture chose, is to take this information and gather it at the endpoint of the net, at the entity which is utilizing the service of the network. I call this approach to reliability “fate-sharing.” The fate-sharing model suggests that it is acceptable to lose the state information associated with an entity if, at the same time, the entity itself is lost. Specifically, information about transport level synchronization is stored in the host which is attached to the net and using its communication service.

Two consequences of this are that the intermediate nodes must not store any (essential) state – leading to a datagram-based (stateless packet switching) design – and that the host becomes an important, trusted part of the overall solution.
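A rough sketch of what fate-sharing means in practice (my framing, not code from the paper): the transport-level synchronisation state, such as sequence numbers and unacknowledged data awaiting retransmission, lives in the end hosts. If a host is lost, the only state lost with it is state describing that host's own conversations; the gateways in between hold nothing that needs recovering.

```python
from dataclasses import dataclass, field

@dataclass
class EndpointTransportState:
    """Transport-level synchronisation held in the host, never in the gateways."""
    next_send_seq: int = 0
    unacknowledged: dict[int, bytes] = field(default_factory=dict)  # seq -> data awaiting ack

    def on_send(self, data: bytes) -> int:
        seq = self.next_send_seq
        self.unacknowledged[seq] = data   # kept so the *endpoint* can retransmit
        self.next_send_seq += 1
        return seq

    def on_ack(self, seq: int) -> None:
        self.unacknowledged.pop(seq, None)
```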

Handling multiple types of communications service

A cross-Internet debugging protocol and VOIP were the first two use cases that suggested something more than just TCP might be needed. You most want the debugger to work precisely when things are going wrong – so a model that says it first requires a fully reliable transport is not a good one! It’s much better to make do with whatever you can get. When it comes to VOIP, regular delivery of packets (even if it means losing some) is more important than reliability for a good user experience.

A surprising observation about the control of variation in delay is that the most serious source of delay in networks is the mechanism to provide reliable delivery!

It was thus decided… to split TCP and IP into two layers.

TCP provided one particular type of service, the reliable sequenced data stream, while IP attempted to provide a basic building block out of which a variety of types of service could be built… The User Datagram Protocol (UDP) was created to provide an application-level interface to the basic datagram service of the Internet.
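The practical upshot of the split is easiest to see at the socket API. Below is a small illustrative sketch using Python's standard library: UDP exposes the raw datagram service almost directly (each send is an independent, best-effort packet), while TCP layers a reliable, sequenced byte stream over the same IP datagrams. The address and port are placeholders, and the TCP connect assumes something is listening there.

```python
import socket

ADDR = ("192.0.2.10", 9999)   # illustrative address/port only

# UDP: a thin application-level interface to the basic datagram service.
# No connection, no ordering, no delivery guarantee -- just one packet.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"one self-contained datagram", ADDR)
udp.close()

# TCP: one particular service built above IP -- a reliable, sequenced stream,
# with all the synchronisation state held in the two communicating hosts.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(ADDR)             # assumes a listener at the illustrative address
tcp.sendall(b"bytes of a reliable, ordered stream")
tcp.close()
```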

Accommodating a variety of networks

The easiest way to accommodate a wide variety of networks is to make the requirements for integrating a network as simple as possible. This boils down to: being able to transport a packet or datagram of reasonable size (e.g. 100 bytes), delivery with reasonable but not perfect reliability, and some form of addressing.

There are a number of services which are explicitly not assumed from the network. These include reliable or sequenced delivery, network level broadcast or multicast, priority ranking of transmitted packets, support for multiple types of service, and internal knowledge of failures, speeds, or delays.
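One way to read that list is as an interface specification. The sketch below (my own framing) captures how little the architecture asks of an underlying network: deliver a reasonably sized datagram to an address, on a best-effort basis, and nothing more: no ordering, no reliability, no broadcast, no priorities, no failure reporting.

```python
from abc import ABC, abstractmethod

class UnderlyingNetwork(ABC):
    """The minimal service an attached network must offer to the internet layer."""

    MIN_DATAGRAM_BYTES = 100   # the "reasonable size" floor mentioned above

    @abstractmethod
    def send(self, destination: str, datagram: bytes) -> None:
        """Best-effort delivery only: the datagram may be lost, duplicated,
        delayed, or reordered, and the network need not report failures."""
```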

On datagrams

There is a mistaken assumption often associated with datagrams, which is that the motivation for datagrams is the support of a higher level service which is essentially equivalent to the datagram. In other words, it has sometimes been suggested that the datagram is provided because the transport service which the application requires is a datagram service. In fact, this is seldom the case.

The importance of datagrams instead stems from:

  1. Eliminating the need for connection state in intermediate nodes
  2. Providing a building block on top of which a variety of services can be built
  3. Representing the minimum network service assumption, enabling a wide variety of networks to be easily incorporated.