# Exploring Complex Networks

Exploring Complex Networks – Strogatz 2001

Network anatomy is important to characterize because structure always affects function…

Written in 2001, this article – recently recommended by Werner Vogels in his ‘Back-to-Basics’ series – explores the topic of complex networks. It turns out that the behaviour of individual nodes, and the way that we connect them together, tells us a lot about the emergent properties of the resulting system.

### Non-linear dynamics primer

Let the state of some system be modelled by a vector of state variables, *x*, with *x(t)* representing the state at time t. Let *v(x)* be a vector of functions that encode the dynamics of the system (how it changes over time). The behaviour of the resulting dynamical system can be modelled by differential equations dx/dt = v(x).

As the system evolves x(t) flows through state space, guided by the ‘velocity’ field dx/dt = v(x) like a speck carried along in a steady, viscous fluid…

There are three possibilities:

- If x(t) comes to rest at some point, then the velocity there must be zero and we call it a *fixed point*; it corresponds to an equilibrium state for the system. If small disturbances from this point ‘damp out’, then it is known as a *stable* fixed point.
- x(t) may flow towards a closed loop and eventually circulate around it forever. This is known as a *limit cycle*.
- x(t) might settle onto a *strange attractor* – a set of states on which it wanders forever, never stopping or repeating. If two nearby states flow away from each other exponentially fast, we call the system *chaotic*.
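To make the state-space picture concrete, here is a minimal sketch of flowing along dx/dt = v(x): forward-Euler integration of a damped oscillator whose trajectory spirals into a stable fixed point at the origin. The particular velocity field, damping constant and step size are illustrative choices, not anything from the paper.

```python
def v(x):
    """Velocity field for a damped oscillator: x'' = -x - 0.5*x'."""
    pos, vel = x
    return (vel, -pos - 0.5 * vel)

def integrate(x, dt=0.01, steps=5000):
    """Follow x(t) through state space with forward Euler, guided by v(x)."""
    for _ in range(steps):
        dx = v(x)
        x = (x[0] + dt * dx[0], x[1] + dt * dx[1])
    return x

final = integrate((1.0, 0.0))
# Small disturbances 'damp out': the trajectory ends near the fixed point (0, 0).
```

Swapping in a different `v` (say, the Lorenz equations) would instead produce a limit cycle or a strange attractor; only the velocity field changes, not the integration loop.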

### Networks of dynamical systems

Consider a network in which each node is a dynamical system. The long-term behaviour of each node in isolation will be given by stable fixed points, limit cycles, or strange attractors. But what can we say about their collective behaviour when we couple them together?

If the dynamical system at each node has stable fixed points and no other attractors, the network tends to lock into a static pattern. Many such patterns may coexist, especially if the nodes have competing interactions. In that case the network may become frustrated and display enormous numbers of locally stable equilibria. This kind of complex static behaviour is seen in models of spin glasses, associative memory neural networks and combinatorial optimization problems.

If each node has a chaotic attractor, then there are few rules. But one curious phenomenon exists for networks of identical chaotic systems:

It is known that networks of identical chaotic systems can synchronize their erratic fluctuations… For a wide range of network topologies, synchronized chaos requires that the coupling be neither too weak nor too strong; otherwise spatial instabilities are triggered.

The intermediate case, in which each node has a stable limit cycle, is more fruitful. When nodes are coupled by smooth interactions similar to diffusion, then:

Arrays of identical oscillators often synchronize, or else form patterns that depend on the symmetry of the underlying network. Other common modes of organization are travelling waves in one spatial dimension, rotating spirals in two dimensions and scroll waves in three dimensions. For fully connected networks where each node is coupled equally to all the others, the completely synchronized state becomes likely.

Instead of smooth interactions though, “Many biological oscillators communicate by sudden impulses: a neuron fires, a firefly flashes, a cricket chirps…” – and many computer networks communicate by sending messages. *Peskin* investigated this scenario. *If each oscillator is connected to all others, they end up firing in unison*. It is conjectured that this is also true if the oscillators are not quite identical.
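Peskin’s model can be sketched with leaky integrate-and-fire oscillators: each voltage obeys dV/dt = S₀ − γV, an oscillator fires and resets to zero on reaching a threshold, and every firing kicks all other voltages up by ε (an oscillator pushed over threshold by a kick fires too and is ‘absorbed’ into the group). The parameter values and initial voltages below are illustrative choices, not from the paper.

```python
import math

S0, GAMMA, EPS, THRESH = 2.0, 1.0, 0.3, 1.0   # illustrative parameters

def advance(V, t):
    """Relax every voltage toward S0/GAMMA for time t (dV/dt = S0 - GAMMA*V)."""
    rest = S0 / GAMMA
    return [rest + (v - rest) * math.exp(-GAMMA * t) for v in V]

def step(V):
    """Advance to the next firing, deliver the pulses, absorb chain reactions."""
    rest = S0 / GAMMA
    lead = max(V)
    t = math.log((rest - lead) / (rest - THRESH)) / GAMMA  # leader hits threshold
    V = advance(V, t)
    fired = {i for i, v in enumerate(V) if v >= THRESH - 1e-9}
    newly = set(fired)
    while newly:                         # chain absorption of kicked oscillators
        kick = EPS * len(newly)
        newly = set()
        for i in range(len(V)):
            if i not in fired:
                V[i] += kick
                if V[i] >= THRESH:
                    newly.add(i)
        fired |= newly
    return [0.0 if i in fired else v for i, v in enumerate(V)]

V = [0.0, 0.5, 0.9]                      # three oscillators, staggered starts
for _ in range(50):
    V = step(V)
groups = len({round(v, 9) for v in V})   # 1 => all firing in unison
```

Running this, the three oscillators merge into a single synchronized group within a handful of firings, matching the all-to-all result quoted above.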

With weaker coupling than in the fully connected model, *Winfree* discovered that there is a ‘temporal analogue of a phase transition.’ With a network of weakly coupled, nearly identical limit-cycle oscillators the system behaves incoherently…

As the coupling is increased, the incoherence persists until a certain threshold is crossed – then a small cluster of oscillators suddenly ‘freezes’ into synchrony. For still greater coupling, all the oscillators become locked in phase and amplitude.

This model was further refined by *Kuramoto*.
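Kuramoto’s mean-field model makes this transition easy to see numerically: each oscillator i obeys dθᵢ/dt = ωᵢ + (K/N) Σⱼ sin(θⱼ − θᵢ), and the order parameter r = |⟨e^{iθ}⟩| measures coherence. A rough forward-Euler sketch follows; the frequency spread, step size and coupling strengths are illustrative choices.

```python
import math, random

def kuramoto_r(K, n=50, dt=0.05, steps=2000, seed=42):
    """Euler-integrate the Kuramoto model and return the final order parameter
    r = |(1/n) sum_j e^{i*theta_j}|  (~0: incoherence, ~1: phase-locked)."""
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    omega = [rng.gauss(0.0, 0.5) for _ in range(n)]   # nearly identical frequencies
    for _ in range(steps):
        # Mean-field identity: (1/n) sum_j sin(theta_j - theta_i) = r*sin(psi - theta_i)
        cx = sum(math.cos(t) for t in theta) / n
        sx = sum(math.sin(t) for t in theta) / n
        r, psi = math.hypot(cx, sx), math.atan2(sx, cx)
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    cx = sum(math.cos(t) for t in theta) / n
    sx = sum(math.sin(t) for t in theta) / n
    return math.hypot(cx, sx)
```

With no coupling (K = 0) the system stays incoherent and r hovers near zero; well above the threshold, most oscillators lock in phase and r approaches 1 – the ‘temporal phase transition’ described above.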

### Network architectures

In a simple random graph with *n* nodes and *m* links, the expected structure of the graph changes as a function of *m*.

Erdős and Rényi studied how the expected topology of this random graph changes as a function of m. When m is small, the graph is likely to be fragmented into many small clusters of nodes, called components. As m increases, the components grow, at first by linking to isolated nodes and later by coalescing with other components. A phase transition occurs at m = n/2, where many clusters crosslink spontaneously to form a single giant component. For m > n/2, this giant component contains on the order of n nodes (more precisely, its size scales linearly with n, as n → ∞), while its closest rival contains only about log n nodes. Furthermore, all nodes in the giant component are connected to each other by short paths: the maximum number of ‘degrees of separation’ between any two nodes grows slowly, like log n.
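The m = n/2 phase transition is straightforward to reproduce numerically. The sketch below (the union–find approach and the parameter values are my own choices) samples m random edges on n nodes and measures the largest component below and above the threshold:

```python
import random

def giant_component(n, m, seed=0):
    """Largest component size after adding m random edges on n nodes,
    tracked with union-find (path halving)."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for _ in range(m):
        a, b = rng.randrange(n), rng.randrange(n)
        parent[find(a)] = find(b)           # merge the two components

    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

n = 10000
below = giant_component(n, n // 4)   # m = n/4: only small components
above = giant_component(n, n)        # m = n: a giant component emerges
```

Below the threshold the largest component stays tiny (order log n); above it, a single component swallows a constant fraction of all n nodes.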

Many real-world networks contain a mixture of order and randomness. Watts and Strogatz studied this regime with a model that starts from a regular lattice and then replaces the original links with random ones with some probability p.

They found that the slightest bit of rewiring transforms the network into a ‘small world,’ with short paths between any two nodes, just as in the giant component of a random graph. Yet the network is much more highly clustered than a random graph, in the sense that if A is linked to B and B is linked to C, there is a greatly increased probability that A will also be linked to C.

The short path and high clustering properties also hold for many natural and technological networks.
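The rewiring experiment is easy to replicate. A small pure-Python sketch follows: build a ring lattice, rewire each link with probability p, then compare average path length and clustering. The graph size, neighbour count k, p, and the random seed are illustrative choices.

```python
import random
from collections import deque

def ws_graph(n=200, k=6, p=0.0, seed=3):
    """Watts-Strogatz model: ring lattice (k neighbours per node), with each
    link's far endpoint rewired to a random node with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    edges = [(i, (i + j) % n) for i in range(n) for j in range(1, k // 2 + 1)]
    for a, b in edges:
        if rng.random() < p:
            b = rng.randrange(n)
            while b == a or b in adj[a]:    # avoid self-loops and duplicates
                b = rng.randrange(n)
        adj[a].add(b); adj[b].add(a)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over reachable node pairs (BFS per node)."""
    total, pairs = 0, 0
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values()); pairs += len(dist) - 1
    return total / pairs

def clustering(adj):
    """Average probability that two neighbours of a node are themselves linked."""
    acc = 0.0
    for u, nbrs in adj.items():
        d = len(nbrs)
        if d < 2:
            continue
        links = sum(1 for v in nbrs for w in nbrs if v < w and w in adj[v])
        acc += 2.0 * links / (d * (d - 1))
    return acc / len(adj)

ring = ws_graph(p=0.0)
small_world = ws_graph(p=0.1)
```

Even at p = 0.1 the average path length collapses toward the random-graph value, while the clustering coefficient stays close to the lattice’s – the small-world signature.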

Furthermore, they conjectured that dynamical systems coupled in this way would display enhanced signal-propagation speed, synchronizability and computational power, as compared with regular lattices of the same size. The intuition is that the short paths could provide high-speed communication channels between distant parts of the system, thereby facilitating any dynamical process (like synchronization or computation) that requires global coordination and information flow.

In real networks, some nodes are more highly connected than others. The *degree* of a node is the number of links that it has. Many networks have a degree distribution that is highly skewed and decays as a power law. For example, the world-wide web has a small number of nodes with many links, and a long tail of nodes with very few links.

Barabási, Albert and Jeong have dubbed these networks ‘scale-free,’ by analogy with fractals, phase transitions, and other situations where power laws arise and no single characteristic scale can be defined.

An interesting property of such networks is their resistance to *random* failures (e.g. of nodes on AWS!).

Albert, Jeong and Barabási suggested that scale-free networks are resistant to random failures because a few hubs dominate their topology. Any node that fails probably has small degree (like most nodes) and so is expendable. The flip side is that such networks are vulnerable to deliberate attacks on the hubs. These intuitive ideas have been confirmed numerically and analytically by examining how the average path length and size of the giant component depend on the number and degree of the nodes removed.
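These robustness claims can be reproduced in a toy experiment. The sketch below grows a scale-free graph by preferential attachment (in the spirit of the Barabási–Albert model; the sizes, seeds and removal counts are my own choices) and compares the giant component after random failures versus a targeted attack on the hubs:

```python
import random
from collections import deque

def ba_graph(n=2000, m=2, seed=7):
    """Preferential attachment: each new node links to m existing nodes chosen
    with probability proportional to their current degree."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    repeated = []                       # each node appears once per link it holds
    for i in range(m):                  # small fully-connected seed
        for j in range(i):
            adj[i].add(j); adj[j].add(i); repeated += [i, j]
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:          # sampling from `repeated` = degree-biased
            chosen.add(rng.choice(repeated) if repeated else rng.randrange(new))
        for t in chosen:
            adj[new].add(t); adj[t].add(new); repeated += [new, t]
    return adj

def largest_component(adj, removed):
    """Size of the largest connected component, ignoring removed nodes."""
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        q, comp = deque([s]), 0
        seen.add(s)
        while q:
            u = q.popleft(); comp += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w); q.append(w)
        best = max(best, comp)
    return best

g = ba_graph()
rng = random.Random(1)
random_nodes = rng.sample(list(g), 100)                          # random failures
hubs = sorted(g, key=lambda u: len(g[u]), reverse=True)[:100]    # targeted attack
after_random = largest_component(g, random_nodes)
after_attack = largest_component(g, hubs)
```

Removing 5% of nodes at random barely dents the giant component, while removing the 100 highest-degree hubs damages it far more – the asymmetry described above.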

Let the power law governing the degree distribution be p_{k} ~ k^{-γ}. If γ < 3.47 (which holds for most scale-free networks measured so far), then a giant component will exist – a component being a set of nodes with a path from each node to every other node. If γ < 1, the network forms one huge connected piece. For the world wide web (at least as of 2001), γ is approximately 2.2.

If this topic interests you, I can also recommend Matthew Jackson’s book on Social and Economic Networks from 2010.
