LruCache Source Code Walkthrough

LruCache is a cache class that Android added in API level 12 (with a compatible copy in the support library for older versions). It replaced the older SoftReference-based approach to caching: in newer Android versions the garbage collector reclaims soft references far more aggressively, often as soon as any memory pressure appears, which makes a SoftReference-based cache unreliable.

The LruCache source is not long; internally it uses a LinkedHashMap<K, V> to store the cached key-value pairs. Below we walk through the source so that the next time you use LruCache your mental model is clearer.

To explain LruCache, we first need to look at how LinkedHashMap behaves.

From the LinkedHashMap documentation:

Hash table and linked list implementation of the Map interface, with predictable iteration order. This implementation differs from HashMap in that it maintains a doubly-linked list running through all of its entries. This linked list defines the iteration order, which is normally the order in which keys were inserted into the map (insertion order). Note that insertion order is not affected if a key is re-inserted into the map. (A key k is re-inserted into map m if m.put(k, v) is invoked when m.containsKey(k) would return true immediately prior to the invocation.)

This implementation spares its clients from the unspecified, generally chaotic ordering provided by HashMap (and Hashtable), without incurring the increased cost associated with TreeMap. It can be used to produce a copy of a map that has the same order as the original, regardless of the original map's implementation:

     void foo(Map m) {
         Map copy = new LinkedHashMap(m);
         ...
     }
 

This technique is particularly useful when a module takes a map on input, copies it, and later returns results whose order is determined by that of the copy. (Clients generally appreciate having things returned in the same order they were presented.)

A special constructor is provided to create a linked hash map whose order of iteration is the order in which its entries were last accessed, from least-recently accessed to most-recently accessed (access order). This kind of map is well suited to building LRU caches.

OK, note that last paragraph: LinkedHashMap can maintain exactly this iteration order, and that property is a natural fit for an LRU cache. Let's look at the constructor that enables it:

LinkedHashMap(int initialCapacity, float loadFactor, boolean accessOrder)
          Constructs an empty LinkedHashMap instance with the specified initial capacity, load factor and ordering mode.

Passing true for the last parameter, accessOrder, switches the map to access-order mode.
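A quick standalone sketch (not part of the LruCache source) of what access order means:

import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // accessOrder = true: iteration runs from least-recently to most-recently accessed
        Map<String, Integer> map = new LinkedHashMap<String, Integer>(0, 0.75f, true);
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);

        map.get("a"); // accessing "a" moves it to the most-recently-used end

        System.out.println(map.keySet()); // prints [b, c, a] -> "b" is now the eldest (LRU) entry
    }
}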

With this background, LruCache is not hard to understand.

First, the fields and the constructor:

public class LruCache<K, V> {
    private final LinkedHashMap<K, V> map;

    /** Size of this cache in units. Not necessarily the number of elements. */
    private int size;          // current cache size
    private int maxSize;       // maximum cache size

    private int putCount;      // number of successful put calls
    private int createCount;   // number of values produced by create()
    private int evictionCount; // number of entries evicted
    private int hitCount;      // number of get calls that hit
    private int missCount;     // number of get calls that missed

    /**
     * @param maxSize for caches that do not override {@link #sizeOf}, this is
     *     the maximum number of entries in the cache. For all other caches,
     *     this is the maximum sum of the sizes of the entries in this cache.
     */
    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }

The fields are self-explanatory; I have added comments above.

In the constructor we supply a maxSize (the maximum capacity) and a LinkedHashMap is created.

We already know why new LinkedHashMap<K, V>(0, 0.75f, true) is used: the third argument turns on access ordering. So the constructor is straightforward.
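Before looking at the methods, here is a minimal usage sketch; the String types and the maxSize of 100 are illustrative assumptions, and with the default sizeOf() the maxSize simply counts entries:

// With the default sizeOf(), maxSize counts entries: at most 100 values are kept.
LruCache<String, String> cache = new LruCache<String, String>(100);
cache.put("key", "value");
String hit  = cache.get("key");  // "value"
String miss = cache.get("nope"); // null: not cached and create() is not overridden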

Now let's look at the two most important methods.

    /**
     * Returns the value for {@code key} if it exists in the cache or can be
     * created by {@code #create}. If a value was returned, it is moved to the
     * head of the queue. This returns null if a value is not cached and cannot
     * be created.
     */
    public final V get(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V mapValue;
        synchronized (this) { // lock for thread safety
            mapValue = map.get(key);
            if (mapValue != null) { // non-null means a hit; update statistics and return
                hitCount++;
                return mapValue;
            }
            missCount++;
        }

        /*
         * Attempt to create a value. This may take a long time, and the map
         * may be different when create() returns. If a conflicting value was
         * added to the map while create() was working, we leave that value in
         * the map and release the created value.
         */

        V createdValue = create(key); // returns null by default; subclasses may override
        if (createdValue == null) {
            return null;
        }

        synchronized (this) {
            createCount++;
            mapValue = map.put(key, createdValue);

            if (mapValue != null) {
                // There was a conflict so undo that last put
                map.put(key, mapValue);
            } else {
                size += safeSizeOf(key, createdValue);
            }
        }

        if (mapValue != null) {
            entryRemoved(false, key, createdValue, mapValue);
            return mapValue;
        } else {
            trimToSize(maxSize);
            return createdValue;
        }
    }

As you can see, get() simply looks the value up by key; on a hit it returns it, and on a miss it first calls create(key). Let's see what that method does.

One more thing worth pointing out: map.get() is called inside a synchronized block, which reduces concurrency. In principle a pure read would not need a lock, but because a LinkedHashMap in access-order mode updates its internal entry list on every get, the whole map has to be locked. You could call this one of LruCache's drawbacks.

/**
	 * Called after a cache miss to compute a value for the corresponding key.
	 * Returns the computed value or null if no value can be computed. The
	 * default implementation returns null.
	 *
	 * <p>The method is called without synchronization: other threads may
	 * access the cache while this method is executing.
	 *
	 * <p>If a value for {@code key} exists in the cache when this method
	 * returns, the created value will be released with {@link #entryRemoved}
	 * and discarded. This can occur when multiple threads request the same key
	 * at the same time (causing multiple values to be created), or when one
	 * thread calls {@link #put} while another is creating a value for the same
	 * key.
	 */
	protected V create(K key) {
		return null;
	}

So after all that documentation, the default implementation simply returns null!

OK, back in get(), we continue reading:

        V createdValue = create(key);
        if (createdValue == null) {
            return null;
        }

Only when create(key) returns null does get() return null at this point. So when would create() return something other than null?

Note that create() is protected, so a subclass can override it to build a new value on demand (keep in mind that create() runs without synchronization); by default it simply returns null. A sketch of such an override follows.
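A minimal sketch of overriding create() (the generated string is purely illustrative):

// A hypothetical subclass that computes a value on a cache miss.
LruCache<String, String> cache = new LruCache<String, String>(100) {
    @Override
    protected String create(String key) {
        // Runs after a miss, outside the cache's lock; it may be slow.
        // Returning null would mean "nothing could be created".
        return "generated-for-" + key;
    }
};

String v = cache.get("absent"); // miss -> create() -> the created value is inserted and returned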

Assuming we have overridden create() and it returned a non-null value, execution continues:

        synchronized (this) {
            createCount++;                         // one more created value
            mapValue = map.put(key, createdValue); // try to insert it into the map

            if (mapValue != null) { // was there already a value for this key?
                // There was a conflict so undo that last put
                map.put(key, mapValue); // put the old value back
            } else {
                size += safeSizeOf(key, createdValue);
            }
        }

The mapValue returned by map.put() is the value the map previously held for that key (null if there was no mapping, otherwise the old value).

If an old value already existed, the write of the new value is undone by putting the old one back; otherwise the insert stands and size is updated. The snippet below recaps the Map.put contract this relies on.
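A quick reminder of the java.util.Map.put behavior in question:

java.util.Map<String, String> m = new java.util.HashMap<String, String>();
String old1 = m.put("k", "v1"); // null: there was no previous mapping for "k"
String old2 = m.put("k", "v2"); // "v1": the value that was just replaced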

Next, let's see how the size is computed:

    private int safeSizeOf(K key, V value) {
        int result = sizeOf(key, value);
        if (result < 0) {
            throw new IllegalStateException("Negative size: " + key + "=" + value);
        }
        return result;
    }

    /**
     * Returns the size of the entry for {@code key} and {@code value} in
     * user-defined units.  The default implementation returns 1 so that size
     * is the number of entries and max size is the maximum number of entries.
     *
     * <p>An entry's size must not change while it is in the cache.
     */
    protected int sizeOf(K key, V value) {
        return 1;
    }

So the default implementation just returns 1. The "safe" in safeSizeOf does not mean thread safety; it guards against a subclass's sizeOf() override reporting a non-positive entry size. A typical override is sketched below.
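A typical override, assuming an Android bitmap cache measured in kilobytes (android.graphics.Bitmap and its getByteCount(), available since API 12, are the only assumptions here):

// Budget roughly 1/8 of the max heap for the cache, in kilobytes (a common but arbitrary choice).
final int cacheSizeKb = (int) (Runtime.getRuntime().maxMemory() / 1024 / 8);

LruCache<String, Bitmap> bitmapCache = new LruCache<String, Bitmap>(cacheSizeKb) {
    @Override
    protected int sizeOf(String key, Bitmap value) {
        // Measure entries in kilobytes instead of "1 per entry".
        return value.getByteCount() / 1024;
    }
};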

Back in get(), the final part:

        if (mapValue != null) {
            entryRemoved(false, key, createdValue, mapValue);
            return mapValue;
        } else {
            trimToSize(maxSize);
            return createdValue;
        }

This again deals with the conflicting-old-value case: if an old value exists, entryRemoved(false, key, createdValue, mapValue) is called and the old value is returned. Let's look at that method.

/**
	 * Called for entries that have been evicted or removed. This method is
	 * invoked when a value is evicted to make space, removed by a call to
	 * {@link #remove}, or replaced by a call to {@link #put}. The default
	 * implementation does nothing.
	 *
	 * <p>The method is called without synchronization: other threads may
	 * access the cache while this method is executing.
	 *
	 * @param evicted true if the entry is being removed to make space, false
	 *     if the removal was caused by a {@link #put} or {@link #remove}.
	 * @param newValue the new value for {@code key}, if it exists. If non-null,
	 *     this removal was caused by a {@link #put}. Otherwise it was caused by
	 *     an eviction or a {@link #remove}.
	 */
	protected void entryRemoved(boolean evicted, K key, V oldValue, V newValue) {}

This method is called without synchronization. The evicted parameter tells you why the entry went away (true: removed to free space; false: caused by put or remove); then come the key, the old value, and the new value. We can override it, for example as in the sketch below.
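For example, a bitmap cache could release evicted entries here. This is only a hedged sketch: calling recycle() is safe only if nothing else can still draw the bitmap, and cacheSizeKb is the value from the sizeOf sketch above.

LruCache<String, Bitmap> cache = new LruCache<String, Bitmap>(cacheSizeKb) {
    @Override
    protected int sizeOf(String key, Bitmap value) {
        return value.getByteCount() / 1024;
    }

    @Override
    protected void entryRemoved(boolean evicted, String key, Bitmap oldValue, Bitmap newValue) {
        if (evicted) {
            // Pushed out by trimToSize() to make room; release it if no one else uses it.
            oldValue.recycle();
        }
        // evicted == false means the entry was replaced by put() or removed by remove().
    }
};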

And if the newly created value was stored successfully, trimToSize() is called:

    /**
     * @param maxSize the maximum size of the cache before returning. May be -1
     *     to evict even 0-sized elements.
     */
    private void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }

                if (size <= maxSize || map.isEmpty()) { // under the limit: stop evicting
                    break;
                }

                Map.Entry<K, V> toEvict = map.entrySet().iterator().next(); // eldest = least recently used
                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value); // keep evicting until size <= maxSize
                evictionCount++;
            }

            entryRemoved(true, key, value, null); // called for every eviction, with evicted == true
        }
    }

This method frees cache space: whenever size exceeds maxSize, the least-recently-used entries are evicted.

Note the while loop: eviction keeps going until size drops to maxSize or below. A small worked example follows.
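Putting the pieces together, a short walk-through of eviction with the default sizeOf() (every entry counts as 1):

LruCache<String, String> cache = new LruCache<String, String>(3); // room for 3 entries
cache.put("a", "1");
cache.put("b", "2");
cache.put("c", "3");
cache.get("a");      // "a" becomes the most recently used entry
cache.put("d", "4"); // size exceeds maxSize, so trimToSize() evicts "b", the least recently used
// cache.snapshot() now holds {c, a, d}, ordered from least to most recently accessed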

After get(), put() is easy to read:

    /**
     * Caches {@code value} for {@code key}. The value is moved to the head of
     * the queue.
     *
     * @return the previous value mapped by {@code key}.
     */
    public final V put(K key, V value) {
        if (key == null || value == null) {
            throw new NullPointerException("key == null || value == null");
        }

        V previous;
        synchronized (this) {
            putCount++;
            size += safeSizeOf(key, value);
            previous = map.put(key, value);
            if (previous != null) { // if an old value was replaced, subtract its size back out
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, value);
        }

        trimToSize(maxSize); // trim the cache if needed
        return previous;
    }

There is not much left to say about put(): every helper it calls has already been covered.

Besides the methods above, LruCache also exposes a set of synchronized getters for its statistics (size(), hitCount(), missCount() and so on), which I won't go through one by one. One more method worth a look is remove():

/**
	 * Removes the entry for {@code key} if it exists.
	 *
	 * @return the previous value mapped by {@code key}.
	 */
	public final V remove(K key) {
		if (key == null) {
			throw new NullPointerException("key == null");
		}

		V previous;
		synchronized (this) {
			previous = map.remove(key);
			if (previous != null) {
				size -= safeSizeOf(key, previous);
			}
		}

		if (previous != null) {
			entryRemoved(false, key, previous, null);
		}

		return previous;
	}

With everything covered so far, you can trace remove() at a glance.

That concludes the LruCache walkthrough. The core idea is simply to maintain an access-ordered LinkedHashMap: put() and get() update it, and trimToSize() is invoked to keep the total size within maxSize.

Because the LinkedHashMap iterates from least recently used to most recently used, we never have to decide ourselves which old entries to discard.

Finally, the complete LruCache source:

/*
 * Copyright (C) 2011 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.duowan.mobile.netroid.cache;

import java.util.LinkedHashMap;
import java.util.Map;

/**
 * This class copy from android support v4.
 * Static library version of {@link android.util.LruCache}. Used to write apps
 * that run on API levels prior to 12. When running on API level 12 or above,
 * this implementation is still used; it does not try to switch to the
 * framework's implementation. See the framework SDK documentation for a class
 * overview.
 */
public class LruCache<K, V> {
	private final LinkedHashMap<K, V> map;

	/** Size of this cache in units. Not necessarily the number of elements. */
	private int size;
	private int maxSize;

	private int putCount;
	private int createCount;
	private int evictionCount;
	private int hitCount;
	private int missCount;

	/**
	 * @param maxSize for caches that do not override {@link #sizeOf}, this is
	 *     the maximum number of entries in the cache. For all other caches,
	 *     this is the maximum sum of the sizes of the entries in this cache.
	 */
	public LruCache(int maxSize) {
		if (maxSize <= 0) {
			throw new IllegalArgumentException("maxSize <= 0");
		}
		this.maxSize = maxSize;
		this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
	}

	/**
	 * Returns the value for {@code key} if it exists in the cache or can be
	 * created by {@code #create}. If a value was returned, it is moved to the
	 * head of the queue. This returns null if a value is not cached and cannot
	 * be created.
	 */
	public final V get(K key) {
		if (key == null) {
			throw new NullPointerException("key == null");
		}

		V mapValue;
		synchronized (this) {
			mapValue = map.get(key);
			if (mapValue != null) {
				hitCount++;
				return mapValue;
			}
			missCount++;
		}

        /*
         * Attempt to create a value. This may take a long time, and the map
         * may be different when create() returns. If a conflicting value was
         * added to the map while create() was working, we leave that value in
         * the map and release the created value.
         */

		V createdValue = create(key);
		if (createdValue == null) {
			return null;
		}

		synchronized (this) {
			createCount++;
			mapValue = map.put(key, createdValue);

			if (mapValue != null) {
				// There was a conflict so undo that last put
				map.put(key, mapValue);
			} else {
				size += safeSizeOf(key, createdValue);
			}
		}

		if (mapValue != null) {
			entryRemoved(false, key, createdValue, mapValue);
			return mapValue;
		} else {
			trimToSize(maxSize);
			return createdValue;
		}
	}

	/**
	 * Caches {@code value} for {@code key}. The value is moved to the head of
	 * the queue.
	 *
	 * @return the previous value mapped by {@code key}.
	 */
	public final V put(K key, V value) {
		if (key == null || value == null) {
			throw new NullPointerException("key == null || value == null");
		}

		V previous;
		synchronized (this) {
			putCount++;
			size += safeSizeOf(key, value);
			previous = map.put(key, value);
			if (previous != null) {
				size -= safeSizeOf(key, previous);
			}
		}

		if (previous != null) {
			entryRemoved(false, key, previous, value);
		}

		trimToSize(maxSize);
		return previous;
	}

	/**
	 * @param maxSize the maximum size of the cache before returning. May be -1
	 *     to evict even 0-sized elements.
	 */
	private void trimToSize(int maxSize) {
		while (true) {
			K key;
			V value;
			synchronized (this) {
				if (size < 0 || (map.isEmpty() && size != 0)) {
					throw new IllegalStateException(getClass().getName()
							+ ".sizeOf() is reporting inconsistent results!");
				}

				if (size <= maxSize || map.isEmpty()) {
					break;
				}

				Map.Entry<K, V> toEvict = map.entrySet().iterator().next();
				key = toEvict.getKey();
				value = toEvict.getValue();
				map.remove(key);
				size -= safeSizeOf(key, value);
				evictionCount++;
			}

			entryRemoved(true, key, value, null);
		}
	}

	/**
	 * Removes the entry for {@code key} if it exists.
	 *
	 * @return the previous value mapped by {@code key}.
	 */
	public final V remove(K key) {
		if (key == null) {
			throw new NullPointerException("key == null");
		}

		V previous;
		synchronized (this) {
			previous = map.remove(key);
			if (previous != null) {
				size -= safeSizeOf(key, previous);
			}
		}

		if (previous != null) {
			entryRemoved(false, key, previous, null);
		}

		return previous;
	}

	/**
	 * Called for entries that have been evicted or removed. This method is
	 * invoked when a value is evicted to make space, removed by a call to
	 * {@link #remove}, or replaced by a call to {@link #put}. The default
	 * implementation does nothing.
	 *
	 * <p>The method is called without synchronization: other threads may
	 * access the cache while this method is executing.
	 *
	 * @param evicted true if the entry is being removed to make space, false
	 *     if the removal was caused by a {@link #put} or {@link #remove}.
	 * @param newValue the new value for {@code key}, if it exists. If non-null,
	 *     this removal was caused by a {@link #put}. Otherwise it was caused by
	 *     an eviction or a {@link #remove}.
	 */
	protected void entryRemoved(boolean evicted, K key, V oldValue, V newValue) {}

	/**
	 * Called after a cache miss to compute a value for the corresponding key.
	 * Returns the computed value or null if no value can be computed. The
	 * default implementation returns null.
	 *
	 * <p>The method is called without synchronization: other threads may
	 * access the cache while this method is executing.
	 *
	 * <p>If a value for {@code key} exists in the cache when this method
	 * returns, the created value will be released with {@link #entryRemoved}
	 * and discarded. This can occur when multiple threads request the same key
	 * at the same time (causing multiple values to be created), or when one
	 * thread calls {@link #put} while another is creating a value for the same
	 * key.
	 */
	protected V create(K key) {
		return null;
	}

	private int safeSizeOf(K key, V value) {
		int result = sizeOf(key, value);
		if (result < 0) {
			throw new IllegalStateException("Negative size: " + key + "=" + value);
		}
		return result;
	}

	/**
	 * Returns the size of the entry for {@code key} and {@code value} in
	 * user-defined units.  The default implementation returns 1 so that size
	 * is the number of entries and max size is the maximum number of entries.
	 *
	 * <p>An entry's size must not change while it is in the cache.
	 */
	protected int sizeOf(K key, V value) {
		return 1;
	}

	/**
	 * Clear the cache, calling {@link #entryRemoved} on each removed entry.
	 */
	public final void evictAll() {
		trimToSize(-1); // -1 will evict 0-sized elements
	}

	/**
	 * For caches that do not override {@link #sizeOf}, this returns the number
	 * of entries in the cache. For all other caches, this returns the sum of
	 * the sizes of the entries in this cache.
	 */
	public synchronized final int size() {
		return size;
	}

	/**
	 * For caches that do not override {@link #sizeOf}, this returns the maximum
	 * number of entries in the cache. For all other caches, this returns the
	 * maximum sum of the sizes of the entries in this cache.
	 */
	public synchronized final int maxSize() {
		return maxSize;
	}

	/**
	 * Returns the number of times {@link #get} returned a value.
	 */
	public synchronized final int hitCount() {
		return hitCount;
	}

	/**
	 * Returns the number of times {@link #get} returned null or required a new
	 * value to be created.
	 */
	public synchronized final int missCount() {
		return missCount;
	}

	/**
	 * Returns the number of times {@link #create(Object)} returned a value.
	 */
	public synchronized final int createCount() {
		return createCount;
	}

	/**
	 * Returns the number of times {@link #put} was called.
	 */
	public synchronized final int putCount() {
		return putCount;
	}

	/**
	 * Returns the number of values that have been evicted.
	 */
	public synchronized final int evictionCount() {
		return evictionCount;
	}

	/**
	 * Returns a copy of the current contents of the cache, ordered from least
	 * recently accessed to most recently accessed.
	 */
	public synchronized final Map<K, V> snapshot() {
		return new LinkedHashMap<K, V>(map);
	}

	@Override public synchronized final String toString() {
		int accesses = hitCount + missCount;
		int hitPercent = accesses != 0 ? (100 * hitCount / accesses) : 0;
		return String.format("LruCache[maxSize=%d,hits=%d,misses=%d,hitRate=%d%%]",
				maxSize, hitCount, missCount, hitPercent);
	}
}