January 22, 2017


LRU Cache C++ Implementation

LRU, or Least Recently Used, is one of the page replacement algorithms, in which the system manages a given amount of memory - by deciding which pages to keep in memory, and which ones to remove when the memory is full.

Update: I now have this article as a YouTube video tutorial as well, where I talk about LRU, data structures, and do hands-on coding in Java (the concept is pretty much the same as in C++). You can check out the video using the link below:

Now, continuing with the C++ implementation.

Let’s say, the capacity of a given cache (memory) is C.

Our memory stores key, value pairs in it.

It should support the following operations:

  • get(key) - Get the value of the given key if it exists in memory (else, let's say, return -1).

  • put(key, value) - Update the value if the key is present, or insert it if not. If our cache has reached its capacity, we should first remove the item which was least recently used.

Another constraint to the given problem is: both operations must run in constant time complexity, i.e., in O(1).

Now, we need to think of data structures that would allow us to perform the above operations in O(1).

Choice of data structures

  • Queue - We should maintain a queue (a double-ended queue) in which the most recently used pages (items) are at the front, and the least recently used pages are at the rear. This lets us remove the least recently used item in O(1) time.

  • Doubly Linked List - We should implement our queue using a doubly linked list (instead of an array), which lets us move a page to the front of the queue in O(1) time.

  • HashMap - We should hash each key to the location (node) where its page is stored. This makes the get operation O(1).

Design and Implementation

Now that we know what data structures to use, let's look at the implementation.

Whenever a user gets a page, we return its value, and also move that page to the front of our Queue.

Whenever a user sets a page: if the page is already present, we update its value and move it to the front of our Queue; otherwise, we add a new page to our cache at the front of the Queue. If the cache has already reached its capacity, we first remove the least recently used page (i.e., the rear item of our Queue) from memory.

  1. class Node
  • Data members:
    1. key
    2. value
    3. next node address
    4. previous node address
  2. class DoublyLinkedList
  • Data members:
    1. front node address
    2. rear node address
  • Member functions:
    1. move_page_to_head()
    2. remove_rear_page()
    3. get_rear_page()
    4. add_page_to_head()
  3. class LRUCache
  • Data members:
    1. capacity
    2. current size
    3. a DoublyLinkedList object
    4. HashMap
  • Member functions:
    1. get(key)
    2. put(key, value)

Let’s make the 3 classes.
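The original listing is not reproduced here, but a minimal sketch following the outline above could look like this (member and method names match the outline; std::map is used for the page table, as in the original; destructors and memory cleanup are omitted for brevity):

```cpp
#include <map>
using namespace std;

class Node {
public:
    int key, value;
    Node *next, *prev;
    Node(int k, int v) : key(k), value(v), next(nullptr), prev(nullptr) {}
};

class DoublyLinkedList {
    Node *front, *rear;

public:
    DoublyLinkedList() : front(nullptr), rear(nullptr) {}

    // Create a new page and place it at the front (most recently used).
    Node* add_page_to_head(int key, int value) {
        Node *page = new Node(key, value);
        if (!front) {
            front = rear = page;
        } else {
            page->next = front;
            front->prev = page;
            front = page;
        }
        return page;
    }

    // Unlink an existing page and relink it at the front - O(1).
    void move_page_to_head(Node *page) {
        if (page == front) return;
        if (page == rear) {
            rear = rear->prev;
            rear->next = nullptr;
        } else {
            page->prev->next = page->next;
            page->next->prev = page->prev;
        }
        page->next = front;
        page->prev = nullptr;
        front->prev = page;
        front = page;
    }

    // Drop the rear page (least recently used).
    void remove_rear_page() {
        if (!rear) return;
        if (front == rear) {
            delete rear;
            front = rear = nullptr;
        } else {
            Node *temp = rear;
            rear = rear->prev;
            rear->next = nullptr;
            delete temp;
        }
    }

    Node* get_rear_page() { return rear; }
};

class LRUCache {
    int capacity, size;
    DoublyLinkedList *pageList;
    map<int, Node*> pageMap;   // key -> node address

public:
    LRUCache(int capacity) : capacity(capacity), size(0) {
        pageList = new DoublyLinkedList();
    }

    int get(int key) {
        if (pageMap.find(key) == pageMap.end()) return -1;
        int val = pageMap[key]->value;
        pageList->move_page_to_head(pageMap[key]);  // mark as most recently used
        return val;
    }

    void put(int key, int value) {
        if (pageMap.find(key) != pageMap.end()) {
            // Key already present: update value and mark as most recently used.
            pageMap[key]->value = value;
            pageList->move_page_to_head(pageMap[key]);
            return;
        }
        if (size == capacity) {
            // Cache full: evict the least recently used page (rear of the queue).
            int rearKey = pageList->get_rear_page()->key;
            pageMap.erase(rearKey);
            pageList->remove_rear_page();
            size--;
        }
        pageMap[key] = pageList->add_page_to_head(key, value);
        size++;
    }
};
```

Both get() and put() only do hash lookups and constant-time pointer relinking, so apart from the std::map lookup cost (see the note at the end), each operation is O(1).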


Running the code

Save the above code in a file named, say, LRUCache.cpp.

In the same directory, create another .cpp file that uses the get() and put() functions of our LRU cache. Paste the code below, then compile and run it:

#include <iostream>
#include "LRUCache.cpp"
using namespace std;

int main() {
	LRUCache cache(2);	// cache capacity 2
	cache.put(1, 10);
	cache.put(2, 20);
	cout << cache.get(2) << endl;	// 20
	cout << cache.get(1) << endl;	// 10
	cache.put(8, 80);	// cache full: evicts key 2, the least recently used
	cout << cache.get(2) << endl;	// -1 (evicted)
	cout << cache.get(1) << endl;	// 10
	cout << cache.get(8) << endl;	// 80
	return 0;
}




The output comes out to be correct (you can verify by tracing a cache of capacity 2 through the given put and get calls in order).

That is all for the LRU cache implementation - i.e., the "Least Recently Used" page replacement algorithm.


To make the operations truly O(1), use unordered_map instead of the ordered map used above (plain std::map): std::map is implemented as a balanced tree, so lookups cost O(log n), while std::unordered_map is a hash table with O(1) average lookups.

The LRU Cache problem is also available on LeetCode (problem: LRU Cache) if you want to check it out.

Any feedback, doubts or questions, please leave in the comments.
