Design a Cache System

As in our previous posts, we like to pick system design interview questions that are both popular and practical, so that you not only get a sense of how to analyze problems in an interview but also learn something interesting along the way.

If you are new to system design interviews, I'd recommend reading this tutorial first. In this post, we address the problem of how to design a cache system. Topics covered include:

  • LRU cache
  • Eviction policy
  • Cache concurrency
  • Distributed cache system

Problem

How to design a cache system?

Caching is a widely adopted technique in almost every application today, and it applies to every layer of the technology stack. For instance, at the network layer a cache is used for DNS lookups, and at the web server layer a cache is used to serve frequent requests.

In short, a cache system stores commonly used resources (often in memory) so that the next time someone requests the same resource, the system can return it immediately. It improves efficiency by trading extra storage space for speed.

LRU

One of the most common cache designs is the LRU (least recently used) cache. In fact, another common interview question is to discuss the data structures and design of an LRU cache. Let's start with this approach.

The way an LRU cache works is quite simple. When a client requests resource A, the following happens:

  • If A exists in the cache, we just return it immediately.
  • If not and the cache has spare capacity, we fetch resource A, return it to the client, and insert it into the cache.
  • If the cache is full, we evict the least recently used resource and replace it with resource A.

The strategy here is to maximize the chance that the requested resource already exists in the cache. So how can we implement a simple LRU cache?

LRU design

An LRU cache should support three operations: lookup, insert, and delete. To achieve fast lookup, we need a hash table. By the same token, to make insert and delete fast, something like a linked list should come to mind. And since we need to locate the least recently used item efficiently, we need a structure that keeps order, such as a queue, stack, or sorted array.

Combining these observations, we can use a queue implemented as a doubly linked list to store all the resources, together with a hash table that maps each resource identifier to the address of the corresponding queue node.

Here's how it works. When resource A is requested, we check the hash table to see whether A exists in the cache. If it does, we can immediately locate the corresponding queue node, move it to the tail of the queue (marking it as most recently used), and return the resource. If not, we add A to the cache: if there is enough space, we simply append A to the tail of the queue and update the hash table; otherwise, we first evict the least recently used entry by removing the head of the queue and the corresponding entry from the hash table. A minimal sketch of this design follows.
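
Below is a minimal Python sketch of this design, assuming a fetch_from_origin callback that stands in for loading the resource on a miss; the class and method names are illustrative, not from the original post.

```python
# A doubly linked list ordered from least to most recently used,
# plus a dict mapping keys to their list nodes.
class Node:
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None

class LRUCache:
    def __init__(self, capacity, fetch_from_origin):
        self.capacity = capacity
        self.fetch = fetch_from_origin      # hypothetical loader called on a miss
        self.table = {}                     # resource id -> Node
        # Sentinel head/tail nodes simplify insertion and removal.
        self.head = Node(None, None)        # head.next is the least recently used entry
        self.tail = Node(None, None)        # tail.prev is the most recently used entry
        self.head.next = self.tail
        self.tail.prev = self.head

    def _unlink(self, node):
        node.prev.next = node.next
        node.next.prev = node.prev

    def _append(self, node):
        # Insert right before the tail, i.e. the most recently used position.
        node.prev = self.tail.prev
        node.next = self.tail
        self.tail.prev.next = node
        self.tail.prev = node

    def get(self, key):
        if key in self.table:
            node = self.table[key]
            self._unlink(node)              # move to the most recently used position
            self._append(node)
            return node.value
        value = self.fetch(key)             # miss: load from the origin
        if len(self.table) >= self.capacity:
            lru = self.head.next            # evict the least recently used entry
            self._unlink(lru)
            del self.table[lru.key]
        node = Node(key, value)
        self._append(node)
        self.table[key] = node
        return value

# Usage: a tiny cache whose "origin" just builds a string.
cache = LRUCache(2, lambda k: "resource-" + k)
cache.get("A"); cache.get("B"); cache.get("A")
cache.get("C")   # evicts B, the least recently used entry
```

The sentinel head and tail nodes mean insertion and removal never have to special-case an empty list, which keeps every operation O(1).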

Eviction policy

When the cache is full, we need to remove existing items to make room for new resources. Deleting the least recently used item is just one of the most common approaches. So are there other ways to do it?

As mentioned above, the goal is to maximize the chance that the requested resource already exists in the cache. I'll briefly mention several approaches here:

  • Random Replacement (RR) – As the term suggests, we can just randomly delete an entry.
  • Least frequently used (LFU) – We keep a count of how often each item is requested and delete the one that is least frequently used (see the sketch after this list).
  • W-TinyLFU – I'd also like to mention this modern eviction policy. In a nutshell, the problem with LFU is that an item may have been used frequently only in the past, yet LFU will still keep it around for a long while. W-TinyLFU solves this by counting frequency within a time window. It also includes various storage optimizations.
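
As an illustration of the LFU idea above, here is a bare-bones sketch; the details are my own assumption rather than the post's, and real implementations avoid the linear scan below by using a frequency list or a min-heap.

```python
class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.values = {}   # key -> value
        self.counts = {}   # key -> number of accesses

    def get(self, key):
        if key in self.values:
            self.counts[key] += 1
            return self.values[key]
        return None        # miss; the caller would fetch the resource and call put()

    def put(self, key, value):
        if key not in self.values and len(self.values) >= self.capacity:
            # Evict the entry with the smallest access count.
            victim = min(self.counts, key=self.counts.get)
            del self.values[victim]
            del self.counts[victim]
        self.values[key] = value
        self.counts[key] = self.counts.get(key, 0) + 1
```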

Concurrency

To discuss concurrency, I'd like to talk about why a cache has concurrency issues and how we can address them.

It falls into the classic reader-writer problem. When multiple clients are trying to update the cache at the same time, there can be conflicts. For instance, two clients may compete for the same cache slot and the one who updates the cache last wins.

The common solution, of course, is to use a lock. The downside is obvious – it hurts performance considerably. How can we optimize this?

One approach is to split the cache into multiple shards and give each shard its own lock, so that clients don't wait for each other when they update entries in different shards. However, given that hot entries are more likely to be accessed, certain shards will be locked far more often than others. A sketch of this lock-striping idea is shown below.
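
A rough sketch of the lock-striping approach follows; the shard count and helper names are assumptions made for illustration.

```python
import threading

class ShardedCache:
    def __init__(self, num_shards=16):
        self.shards = [{} for _ in range(num_shards)]
        self.locks = [threading.Lock() for _ in range(num_shards)]

    def _index(self, key):
        # The same key always maps to the same shard within this process.
        return hash(key) % len(self.shards)

    def get(self, key):
        i = self._index(key)
        with self.locks[i]:
            return self.shards[i].get(key)

    def put(self, key, value):
        i = self._index(key)
        with self.locks[i]:
            self.shards[i][key] = value
```

Clients touching keys in different shards acquire different locks, so they no longer serialize on a single global lock.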

An alternative is to use commit logs. Instead of updating the cache immediately, we record every mutation in a log, and background processes then apply the log entries asynchronously. This strategy is commonly adopted in database design; a toy sketch follows.
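
Here is a toy sketch of the commit-log approach, assuming a single background worker that drains a queue of mutations; a real system would add batching, persistence, and stricter ordering guarantees.

```python
import queue
import threading

class LogBackedCache:
    def __init__(self):
        self.data = {}
        self.log = queue.Queue()                     # the "commit log" of pending writes
        threading.Thread(target=self._apply_loop, daemon=True).start()

    def put(self, key, value):
        # Writers only append to the log; this returns immediately, no lock needed.
        self.log.put((key, value))

    def get(self, key):
        # Reads may briefly see stale data until the log is drained.
        return self.data.get(key)

    def _apply_loop(self):
        while True:
            key, value = self.log.get()              # blocks until a mutation arrives
            self.data[key] = value
```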

Distributed cache

When the system reaches a certain scale, we need to distribute the cache across multiple machines.

The general strategy is to keep a hash table that maps each resource to the corresponding machine. When resource A is requested, this mapping tells us that machine M is responsible for caching A, so we direct the request to M. Machine M then works just like the local cache discussed above: it may need to fetch and update the cache for A if A isn't in memory, and afterwards it returns the result to the original server. A sketch of the routing step is shown below.
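
A simple sketch of the routing step might look like the following; the machine addresses are hypothetical, and a plain modulo hash is used only for brevity, since production systems typically prefer consistent hashing so that adding or removing a machine remaps only a small fraction of keys.

```python
import hashlib

def pick_machine(resource_id, machines):
    # Hash the resource id to deterministically choose a cache machine.
    digest = hashlib.md5(resource_id.encode()).hexdigest()
    return machines[int(digest, 16) % len(machines)]

machines = ["cache-1:11211", "cache-2:11211", "cache-3:11211"]
print(pick_machine("user:42:profile", machines))   # always the same machine for this key
```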

If you are interested in this topic, you can read more about Memcached.

Summary

Caching can be a really interesting and practical topic, as it's used in almost every system nowadays. There are still many topics I haven't covered here, such as expiration policy.

If you want to read more posts like this one, check out our collection of system design interview questions.

The post is written by Gainlo - a platform that allows you to have mock interviews with employees from Google, Amazon, etc.
