Outperforming LRU with an Adaptive Replacement Cache Algorithm

Megiddo & Modha, 2004

Ask most people to name a cache management algorithm, and the first thing that springs to their mind is likely to be LRU. But it turns out there is a better (lesser-known) algorithm called ARC (Adaptive Replacement Cache), as used for example by OSv.

The adaptive replacement cache is a self-tuning, low-overhead algorithm that responds online to changing access patterns. ARC continually balances between the recency and frequency features of the workload, demonstrating that adaptation eliminates the need for the workload-specific pretuning that plagued many previous proposals to improve LRU.

Sounds good, but is it really complicated to implement?

Like LRU, ARC is easy to implement, and its running time per request is essentially independent of the cache size.

And of course what really matters is that:

ARC leads to substantial performance gains in terms of an improved hit ratio compared with LRU for a large range of cache sizes.

So how does it work?

ARC maintains two LRU lists of pages: L1 maintains pages that have been seen only once, recently, while L2 maintains pages that have been seen at least twice, recently.

L1 captures recency while L2 captures frequency. Strive to keep the two lists roughly the same length…
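To make the two-list idea concrete, here is a minimal Python sketch of the ARC policy as described in the paper. T1 and T2 are the resident halves of L1 and L2; B1 and B2 are the "ghost" tails that remember recently evicted keys, and an adaptive target `p` shifts capacity between recency (T1) and frequency (T2) on ghost hits. This is an illustrative simplification (the class name and integer form of the adaptation step are mine), not a production implementation:

```python
from collections import OrderedDict

class ARCCache:
    """Sketch of ARC (Megiddo & Modha): two LRU lists plus ghost lists."""

    def __init__(self, capacity):
        self.c = capacity
        self.p = 0                # adaptive target size for T1
        self.t1 = OrderedDict()   # resident pages seen once (recency)
        self.t2 = OrderedDict()   # resident pages seen >= twice (frequency)
        self.b1 = OrderedDict()   # ghost keys recently evicted from T1
        self.b2 = OrderedDict()   # ghost keys recently evicted from T2

    def _replace(self, hit_in_b2):
        # Evict from T1 if it exceeds its target p, else from T2.
        if self.t1 and (len(self.t1) > self.p or
                        (hit_in_b2 and len(self.t1) == self.p)):
            key, _ = self.t1.popitem(last=False)
            self.b1[key] = None
        else:
            key, _ = self.t2.popitem(last=False)
            self.b2[key] = None

    def get(self, key):
        if key in self.t1:        # second access: promote to frequency list
            val = self.t1.pop(key)
            self.t2[key] = val
            return val
        if key in self.t2:        # refresh MRU position in T2
            self.t2.move_to_end(key)
            return self.t2[key]
        return None               # miss

    def put(self, key, val):
        if key in self.t1 or key in self.t2:
            self.get(key)         # promote, then overwrite the value
            self.t2[key] = val
            return
        if key in self.b1:        # ghost hit: recency helping, grow p
            self.p = min(self.p + max(len(self.b2) // max(len(self.b1), 1), 1), self.c)
            self._replace(False)
            del self.b1[key]
            self.t2[key] = val
            return
        if key in self.b2:        # ghost hit: frequency helping, shrink p
            self.p = max(self.p - max(len(self.b1) // max(len(self.b2), 1), 1), 0)
            self._replace(True)
            del self.b2[key]
            self.t2[key] = val
            return
        # Brand-new key: keep the directory (T1+B1 and T2+B2) bounded.
        if len(self.t1) + len(self.b1) == self.c:
            if len(self.t1) < self.c:
                self.b1.popitem(last=False)
                self._replace(False)
            else:
                self.t1.popitem(last=False)
        elif len(self.t1) + len(self.b1) < self.c:
            total = len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2)
            if total >= self.c:
                if total == 2 * self.c:
                    self.b2.popitem(last=False)
                self._replace(False)
        self.t1[key] = val
```

Note how every operation only touches list heads and tails plus a hash lookup, which is why the per-request cost is constant, just like LRU.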

This approach can even beat alternative algorithms that have been tuned to optimal settings based on historic workloads:

Surprisingly, ARC, which operates completely online, delivers performance comparable to several state-of-the-art cache-replacement policies, even when, with hindsight, these policies choose the best fixed values for their tuning parameters. ARC matches LRU’s ease of implementation, requiring only two LRU lists.

See the paper for full details and some performance comparison results.