A Robust Layered Control System for a Mobile Robot – Brooks 1985

With over 9,000 citations, “A Robust Layered Control System for a Mobile Robot” seems to be to the world of robotics what “On the Criteria to be used in Decomposing Systems into Modules” is to general software engineering.

We’re back in 1985, programming in Lisp at the MIT Artificial Intelligence Laboratory. The challenge is to build a robot that can wander around and explore an area, building a map of its surroundings. And the key question addressed by this paper is how best to decompose the software into modules. Take a look at the hardware being used in comparison to the Atlas robot we saw yesterday – things have come a long way in the intervening 30 years!

[Figure: the MIT AI Lab robot, c. 1985]

A control system for such a robot has to satisfy a number of constraints:

  • The robot may have multiple goals, of differing priorities. Priorities may change over time – for example, getting off the railroad tracks when a train is coming takes priority over inspecting the last 10 track ties…
  • The robot may have multiple sensors that probably return inconsistent readings; it must be able to make decisions in the face of this.
  • The robot should be robust to failures – in some of its sensors, processors, and also with respect to drastic changes of environment.
  • It should be possible to extend the capabilities of the robot over time (more processing power, sensors etc.).

Additionally, Brooks gives nine dogmatic principles for designing such a system. I especially like the first two:

  • “Complex (and useful) behavior need not necessarily be the product of an extremely complex control system. Rather, complex behavior may simply be the reflection of a complex environment. It may be an observer who ascribes complexity to an organism – not necessarily its designer.”
  • Things should be simple:

This has two applications. (1) When building a system of many parts one must pay attention to the interfaces. If you notice that a particular interface is starting to rival in complexity the components it connects, then either the interface needs to be rethought or the decomposition of the system needs redoing. (2) If a particular component or collection of components solves an unstable or ill-conditioned problem, then it is probably not a good solution from the standpoint of robustness of the system.

The fifth principle also contains some advice rediscovered by the authors of last week’s paper on robot swarms: “Relational maps are more useful to a mobile robot. This alters the design space for perception systems.”

There are many possible approaches to building an autonomous intelligent mobile robot. As with most engineering problems they all start by decomposing the problem into pieces, solving the sub-problems for each piece, and then composing the solutions. We think we have done the first of these three steps differently to other groups. The second and third steps also differ as a consequence.

The traditional approach at the time was to decompose the problem into vertical slices of functionality representing the data flow. Everything flows through a pipeline that looks like this:

[Figure: robot control layers – vertical decomposition into functional slices]

The slices form a chain through which information flows from the robot’s environment, via sensing, through the robot and back to the environment, via action, closing the feedback loop (of course most implementations of the above subproblems include internal feedback loops also). An instance of each piece must be built in order to run the robot at all. Later changes to a particular piece (to improve it or extend its functionality) must either be done in such a way that the interfaces to adjacent pieces do not change, or the effects of the change must be propagated to neighbouring pieces, changing their functionality too.

(A bit like changing a traditional web app often requires changes at each layer – which gets much more problematic if you divide into teams/modules along these boundaries).
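To make the contrast concrete, here is a minimal sketch of the traditional pipeline style in Python. This is my own illustration, not code from the paper: the function names, stub sensor data, and thresholds are all invented. The point it shows is that every control cycle must thread through every functional slice in turn:

```python
# A sketch of the traditional "vertical slice" decomposition.
# All functions and values here are invented for illustration.

def sense():
    """Read raw data from all the sensors (stubbed sonar readings)."""
    return {"sonar": [1.2, 0.4, 3.0]}

def perceive(raw):
    """Turn raw readings into a symbolic description of the world."""
    return {"nearest_obstacle": min(raw["sonar"])}

def model(percepts, world):
    """Fold the new percepts into the single central world model."""
    world.update(percepts)
    return world

def plan(world):
    """Choose the next action from the central world model."""
    return "halt" if world["nearest_obstacle"] < 0.5 else "forward"

def act(command):
    """Send the chosen command to the motors."""
    print(f"motor command: {command}")

world = {}
act(plan(model(perceive(sense()), world)))   # one tick of the control loop
```

Because a cycle passes through every slice, changing the output of (say) `perceive` forces matching changes in `model` and `plan` – exactly the interface-ripple problem Brooks describes.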

Rather than slice the problem on the basis of internal workings of the solution, we slice the problem on the basis of desired external manifestations of the robot control system.

(Or as we might say today, divide the system into services that each encapsulate a complete piece of end-user functionality top-to-bottom, rather than by technology layers).

To this end, we have defined a number of levels of competence for an autonomous mobile robot. A level of competence is an informal specification of a desired class of behaviors for a robot over all environments it will encounter. A higher level of competence implies a more specific desired class of behaviors.

This leads to a decomposition that looks like this:

[Figure: robot control layers – horizontal decomposition into levels of competence]

The levels of competence are:

  • 0, avoid contact with objects
  • 1, wander aimlessly around without hitting things
  • 2, explore the world by seeing places in the distance that look reachable and heading for them
  • 3, build a map of the environment and plan routes from one place to another
  • 4, notice changes in the “static” environment
  • 5, reason about the world in terms of identifiable objects and perform tasks related to certain objects
  • 6, formulate and execute plans which involve changing the state of the world in some desirable way
  • 7, reason about the behavior of objects in the world and modify plans accordingly

The key idea of levels of competence is that we can build layers of a control system corresponding to each level of competence and simply add a new layer to an existing set to move to the next higher level of overall competence. We start by building a complete robot system which achieves level 0 competence. It is debugged thoroughly. We never alter that system. We call it the zeroth level control system. Next, we build another control layer, which we call the first level control system. It is able to examine data from the level 0 system and is also permitted to inject data into the internal interfaces of level 0 suppressing the normal data flow. This layer, with the aid of the zeroth layer, achieves level 1 competence. The zeroth layer continues to run unaware of the layer above it which sometimes interferes with its data paths. The same process is repeated to achieve higher levels of competence. We call this architecture a subsumption architecture…
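Here is a minimal sketch in Python of that layering-and-suppression idea. Again, this is my own hedged illustration with invented layer logic – Brooks built the real thing from small finite state machines wired together, written in Lisp. Level 0 moves along a heading but refuses to hit anything; level 1 achieves wandering by occasionally injecting a new heading into level 0’s input, suppressing its default value, while level 0 runs unchanged and unaware:

```python
# A sketch of two subsumption layers. All logic and numbers are
# invented for illustration; this is not Brooks' implementation.
import random

def level0(sonar, heading=0.0):
    """Level 0 competence: move along `heading`, but never hit anything.
    It knows nothing about the layer above it."""
    if min(sonar) < 0.5:
        return "halt"                      # reflex: avoid contact
    return f"move heading={heading:+.2f}"

def level1_wander():
    """Level 1 competence: now and then pick a new random heading to
    inject into level 0's heading wire, suppressing its default."""
    if random.random() < 0.5:
        return random.uniform(-3.14, 3.14)
    return None                            # no suppression this cycle

def control_step(sonar):
    injected = level1_wander()
    heading = injected if injected is not None else 0.0
    # level 0 runs exactly as it did before level 1 was built
    return level0(sonar, heading)

for sonar in ([2.0, 1.5, 3.0], [0.3, 1.0, 2.0]):
    print(sonar, "->", control_step(sonar))
```

Note that if `level1_wander` were removed, or simply failed, `level0` would still keep the robot from hitting things – which is exactly the robustness claim in the list of benefits below.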

Some benefits of such an approach:

  • You have a working control system for the robot very early on – additional layers can be added later, and the initial working system need never be changed.
  • Individual layers can work concurrently on their own individual goals. The suppression mechanism then mediates the actions that are taken…
  • Not all sensors need to feed into a central representation – each layer can process data from the sensors it needs in its own fashion and use the results to achieve its own goals.
  • Robustness is achieved since lower levels that have been debugged well still continue to run when higher levels are added. If a higher level suppresses the outputs of lower levels the lower levels will still produce results that are sensible, albeit at a lower level of competence.
  • You can easily extend the architecture by making each new layer run in its own processor…

And within each competence layer, you are free to design the system in the way that makes the most sense for that layer:

But what about building each individual layer? Don’t we need to decompose a single layer in the traditional manner? This is true to some extent, but the key difference is that we don’t need to account for all desired perceptions and processing and generated behaviours in a single decomposition. We are free to use different decompositions for different sensor-set task-set pairs.

See the full paper for examples of this approach being worked through competence layers 0, 1, and 2.

If we were to try and define competence levels for a typical distributed application, what would they be, I wonder?