It takes a long time for a CPU to access data in memory, so most CPUs have caches from which requests for data can be served faster. To be cost-effective and to enable efficient use of data, caches must be relatively small. Therefore, we need a policy that wisely chooses which data to store in the cache.
We divide the memory into blocks of the same size and index them from $1$ to $n$, so every block has a unique index. A cache has a capacity of $c$ blocks, which means it can store at most $c$ blocks simultaneously.
A cache hit occurs when the requested block is available in the cache; otherwise, a cache miss occurs.
Now we introduce an LRU (Least Recently Used) replacement policy on a fully associative cache.
- If the requested block is available in the cache, a cache hit occurs.
- If not, the CPU must fetch the block from memory and then write it into the cache. If the cache is not full, the block is simply appended to the cache.
- If the cache is full, the block in the cache that has not been visited for the longest time is replaced by the new block, as sketched below.
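Here is a minimal simulation sketch of the policy above. The function name `count_hits` and its interface are illustrative choices, not part of the statement; it counts how many requests in a given sequence hit a fully associative LRU cache of a given capacity.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Simulate a fully associative LRU cache of the given capacity on the
// request sequence `blocks` and return the number of cache hits.
long long count_hits(const vector<int>& blocks, int capacity) {
    unordered_map<int, int> last_use;  // block -> index of its most recent request
    set<pair<int, int>> by_age;        // {last-use index, block}, oldest first

    long long hits = 0;
    for (int i = 0; i < (int)blocks.size(); i++) {
        int b = blocks[i];
        auto it = last_use.find(b);
        if (it != last_use.end()) {
            // Cache hit: drop the stale age entry; it is refreshed below.
            hits++;
            by_age.erase({it->second, b});
        } else if ((int)last_use.size() == capacity) {
            // Cache miss with a full cache: evict the least recently used block.
            auto lru = *by_age.begin();
            by_age.erase(by_age.begin());
            last_use.erase(lru.second);
        }
        // Write (or re-mark) the block with the current request index.
        last_use[b] = i;
        by_age.insert({i, b});
    }
    return hits;
}
```

Each request costs $O(\log c)$ time, since only the ordered set of last-use times needs a logarithmic operation.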
An example for a cache with a capacity of $3$ blocks is shown below. Suppose the requested blocks are $1, 2, 3, 1, 4$, in order.
Blocks $1$, $2$ and $3$ are in the cache when block $4$ is requested, so a cache miss occurs. At that time, the cache is full, and we must decide which block to replace. The most recent request of block $1$ is the $4$th request, the most recent request of block $2$ is the $2$nd request, and the most recent request of block $3$ is the $3$rd request. Block $2$ will be replaced by block $4$ because block $2$ hasn't been visited for the longest time.
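As a sanity check, feeding this example to the `count_hits` sketch above (with the hypothetical interface chosen there) should report exactly one hit: the $4$th request, which asks for block $1$ again.

```cpp
// Reproduces the worked example: capacity 3, requests 1 2 3 1 4.
int main() {
    vector<int> requests = {1, 2, 3, 1, 4};
    // Prints 1; on the final miss, block 2 is the one evicted.
    cout << count_hits(requests, 3) << "\n";
}
```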
Now the sequence of requested blocks and an integer $K$ are given; please determine the minimum capacity of the cache needed to ensure that at least $K$ requests hit the cache.
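One possible approach, sketched under the assumptions above: LRU is a stack algorithm, meaning the contents of a cache of capacity $c$ are always a subset of the contents of a cache of capacity $c+1$ on the same request sequence, so the hit count never decreases as the capacity grows. That monotonicity makes a binary search over the capacity valid, reusing the hypothetical `count_hits` helper.

```cpp
// Binary search for the smallest capacity giving at least K hits.
// Assumes such a capacity exists, i.e. K does not exceed the hit count
// at capacity n, where every block fits and only first requests miss.
int min_capacity(const vector<int>& blocks, long long K) {
    int lo = 1, hi = (int)blocks.size();
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (count_hits(blocks, mid) >= K) hi = mid;   // mid is sufficient
        else lo = mid + 1;                            // mid is too small
    }
    return lo;
}
```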