LruCache in Android: Principles and Programming

Android uses LruCache to replace the memory caches previously built on strong and soft references. Reportedly, since Android 2.3 the garbage collector runs more aggressively, so data cached only through soft references is very easily reclaimed.

LruCache implements an in-memory cache with a single LinkedHashMap; there are no soft references, only strong ones. Because of how the LinkedHashMap is ordered, the most recently used entries sit at the tail and the oldest at the head. When newly added data pushes the cache past its configured maximum, the entries cached earliest (at the head) are removed to bring memory use back down. The core of this logic lives in the trimToSize() method and revolves around two fields: size and maxSize.

maxSize is set through the constructor and represents the maximum amount this cache is allowed to hold.

size is updated whenever an entry is added or removed, via the safeSizeOf() method, which delegates to sizeOf(). sizeOf() returns 1 by default, but it is usually overridden to match the unit of maxSize: for example, if maxSize is interpreted as kilobytes, sizeOf() should return the entry's memory footprint in kilobytes.
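As a concrete illustration of that pattern, here is a minimal sketch of a kilobyte-based bitmap cache. The class name BitmapMemoryCache and the one-eighth-of-heap budget are illustrative choices, not anything mandated by the framework:

import android.graphics.Bitmap;
import android.util.LruCache;

public class BitmapMemoryCache {
    private final LruCache<String, Bitmap> cache;

    public BitmapMemoryCache() {
        // Budget roughly 1/8 of the app's maximum heap, measured in kilobytes.
        int maxMemoryKb = (int) (Runtime.getRuntime().maxMemory() / 1024);
        int cacheSizeKb = maxMemoryKb / 8;

        cache = new LruCache<String, Bitmap>(cacheSizeKb) {
            @Override
            protected int sizeOf(String key, Bitmap value) {
                // Report each entry's size in KB so it matches maxSize's unit.
                return value.getByteCount() / 1024;
            }
        };
    }

    public void put(String key, Bitmap bitmap) {
        if (key != null && bitmap != null && cache.get(key) == null) {
            cache.put(key, bitmap);
        }
    }

    public Bitmap get(String key) {
        return key == null ? null : cache.get(key);
    }
}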

Aside from the consistency check (trimToSize() throws an IllegalStateException if sizeOf() reports inconsistent results), the method first tests whether size exceeds maxSize. If it does, it fetches the earliest-cached entry and, if that entry is not null, removes it from the map and subtracts the entry's size from size (as long as the map is not empty, eldest() will not return null, because the map is backed by a doubly linked list). This loop keeps running until size is no larger than maxSize or the cache is empty.
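The following sketch makes that eviction order visible. It relies only on the behavior described above, using the default sizeOf() of 1 so that maxSize simply counts entries (androidx.collection.LruCache behaves the same way if you want to run this off-device):

import android.util.LruCache;

public class EvictionDemo {
    public static void demo() {
        // Room for two entries, since sizeOf() defaults to 1 per entry.
        LruCache<String, String> cache = new LruCache<String, String>(2);

        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // "a" becomes the most recently used entry
        cache.put("c", "3"); // size now exceeds maxSize, so trimToSize() evicts the eldest: "b"

        System.out.println(cache.get("b"));        // null -- "b" was evicted
        System.out.println(cache.snapshot());      // {a=1, c=3}, eldest first
        System.out.println(cache.evictionCount()); // 1
    }
}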

Overview of the LruCache class provided by Android

 package android.util;  

import java.util.LinkedHashMap;
import java.util.Map;  

/**
 * A cache that holds strong references to a limited number of values. Each time
 * a value is accessed, it is moved to the head of a queue. When a value is
 * added to a full cache, the value at the end of that queue is evicted and may
 * become eligible for garbage collection.
 * <p>If your cached values hold resources that need to be explicitly released,
 * override {@link #entryRemoved}.
 * <p>If a cache miss should be computed on demand for the corresponding keys,
 * override {@link #create}. This simplifies the calling code, allowing it to
 * assume a value will always be returned, even when there's a cache miss.
 * <p>By default, the cache size is measured in the number of entries. Override
 * {@link #sizeOf} to size the cache in different units. For example, this cache
 * is limited to 4MiB of bitmaps:
 * <pre>   {@code
 *   int cacheSize = 4 * 1024 * 1024; // 4MiB
 *   LruCache<String, Bitmap> bitmapCache = new LruCache<String, Bitmap>(cacheSize) {
 *       protected int sizeOf(String key, Bitmap value) {
 *           return value.getByteCount();
 *       }
 *   }}</pre>
 *
 * <p>This class is thread-safe. Perform multiple cache operations atomically by
 * synchronizing on the cache: <pre>   {@code
 *   synchronized (cache) {
 *     if (cache.get(key) == null) {
 *         cache.put(key, value);
 *     }
 *   }}</pre>
 *
 * <p>This class does not allow null to be used as a key or value. A return
 * value of null from {@link #get}, {@link #put} or {@link #remove} is
 * unambiguous: the key was not in the cache.
 */
public class LruCache<K, V> {
    private final LinkedHashMap<K, V> map;  

    /** Size of this cache in units. Not necessarily the number of elements. */
    private int size; // current size of the cache, in user-defined units
    private int maxSize; // the cache's maximum size, in the same units

    private int putCount;  // number of put() calls
    private int createCount;  // number of values returned by create()
    private int evictionCount;  // number of entries evicted
    private int hitCount;  // number of cache hits
    private int missCount;  // number of cache misses

    /**
     * @param maxSize for caches that do not override {@link #sizeOf}, this is
     *     the maximum number of entries in the cache. For all other caches,
     *     this is the maximum sum of the sizes of the entries in this cache.
     */
    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }  

    /**
     * Returns the value for {@code key} if it exists in the cache or can be
     * created by {@code #create}. If a value was returned, it is moved to the
     * head of the queue. This returns null if a value is not cached and cannot
     * be created.
     */
    public final V get(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }  

        V mapValue;
        synchronized (this) {
            mapValue = map.get(key);
            if (mapValue != null) {
                hitCount++;  // cache hit
                return mapValue;
            }
            missCount++;  // cache miss
        }  

        /*
         * Attempt to create a value. This may take a long time, and the map
         * may be different when create() returns. If a conflicting value was
         * added to the map while create() was working, we leave that value in
         * the map and release the created value.
         */  

        V createdValue = create(key);
        if (createdValue == null) {
            return null;
        }  

        synchronized (this) {
            createCount++; // a value was created
            mapValue = map.put(key, createdValue);  

            if (mapValue != null) {
                // There was a conflict so undo that last put
                // (the created value is handed to entryRemoved() below and discarded)
                map.put(key, mapValue);
            } else {
                size += safeSizeOf(key, createdValue);
            }
        }  

        if (mapValue != null) {
            entryRemoved(false, key, createdValue, mapValue);
            return mapValue;
        } else {
            trimToSize(maxSize);
            return createdValue;
        }
    }  

    /**
     * Caches {@code value} for {@code key}. The value is moved to the head of
     * the queue.
     *
     * @return the previous value mapped by {@code key}.
     */
    public final V put(K key, V value) {
        if (key == null || value == null) {
            throw new NullPointerException("key == null || value == null");
        }  

        V previous;
        synchronized (this) {
            putCount++;
            size += safeSizeOf(key, value);
            previous = map.put(key, value);
            if (previous != null) {  // a previous value was displaced; subtract its size
                size -= safeSizeOf(key, previous);
            }
        }  

        if (previous != null) {
            entryRemoved(false, key, previous, value);
        }  

        trimToSize(maxSize);
        return previous;
    }  

    /**
     * @param maxSize the maximum size of the cache before returning. May be -1
     *     to evict even 0-sized elements.
     *     Trims the cache by removing the eldest entries until size is no larger than maxSize.
     */
    private void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }  

                if (size <= maxSize) {
                    break;
                }  

                Map.Entry<K, V> toEvict = map.eldest();
                if (toEvict == null) {
                    break;
                }  

                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value);
                evictionCount++;
            }  

            entryRemoved(true, key, value, null);
        }
    }  

    /**
     * Removes the entry for {@code key} if it exists.
     * @return the previous value mapped by {@code key}.
     */
    public final V remove(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }  

        V previous;
        synchronized (this) {
            previous = map.remove(key);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }  

        if (previous != null) {
            entryRemoved(false, key, previous, null);
        }  

        return previous;
    }  

    /**
     * Called for entries that have been evicted or removed. This method is
     * invoked when a value is evicted to make space, removed by a call to
     * {@link #remove}, or replaced by a call to {@link #put}. The default
     * implementation does nothing.
     * <p>The method is called without synchronization: other threads may
     * access the cache while this method is executing.
     *
     * @param evicted true if the entry is being removed to make space, false
     *     if the removal was caused by a {@link #put} or {@link #remove}.
     * @param newValue the new value for {@code key}, if it exists. If non-null,
     *     this removal was caused by a {@link #put}. Otherwise it was caused by
     *     an eviction or a {@link #remove}.
     */
    protected void entryRemoved(boolean evicted, K key, V oldValue, V newValue) {}  

    /**
     * Called after a cache miss to compute a value for the corresponding key.
     * Returns the computed value or null if no value can be computed. The
     * default implementation returns null.
     * <p>The method is called without synchronization: other threads may
     * access the cache while this method is executing.
     *
     * <p>If a value for {@code key} exists in the cache when this method
     * returns, the created value will be released with {@link #entryRemoved}
     * and discarded. This can occur when multiple threads request the same key
     * at the same time (causing multiple values to be created), or when one
     * thread calls {@link #put} while another is creating a value for the same
     * key.
     */
    protected V create(K key) {
        return null;
    }  

    private int safeSizeOf(K key, V value) {
        int result = sizeOf(key, value);
        if (result < 0) {
            throw new IllegalStateException("Negative size: " + key + "=" + value);
        }
        return result;
    }  

    /**
     * Returns the size of the entry for {@code key} and {@code value} in
     * user-defined units.  The default implementation returns 1 so that size
     * is the number of entries and max size is the maximum number of entries.
     * <p>An entry's size must not change while it is in the cache.
     */
    protected int sizeOf(K key, V value) {
        return 1;
    }  

    /**
     * Clear the cache, calling {@link #entryRemoved} on each removed entry.
     */
    public final void evictAll() {
        trimToSize(-1); // -1 will evict 0-sized elements
    }  

    /**
     * For caches that do not override {@link #sizeOf}, this returns the number
     * of entries in the cache. For all other caches, this returns the sum of
     * the sizes of the entries in this cache.
     */
    public synchronized final int size() {
        return size;
    }  

    /**
     * For caches that do not override {@link #sizeOf}, this returns the maximum
     * number of entries in the cache. For all other caches, this returns the
     * maximum sum of the sizes of the entries in this cache.
     */
    public synchronized final int maxSize() {
        return maxSize;
    }  

    /**
     * Returns the number of times {@link #get} returned a value that was
     * already present in the cache.
     */
    public synchronized final int hitCount() {
        return hitCount;
    }  

    /**
     * Returns the number of times {@link #get} returned null or required a new
     * value to be created.
     */
    public synchronized final int missCount() {
        return missCount;
    }  

    /**
     * Returns the number of times {@link #create(Object)} returned a value.
     */
    public synchronized final int createCount() {
        return createCount;
    }  

    /**
     * Returns the number of times {@link #put} was called.
     */
    public synchronized final int putCount() {
        return putCount;
    }  

    /**
     * Returns the number of values that have been evicted.
     */
    public synchronized final int evictionCount() {
        return evictionCount;
    }  

    /**
     * Returns a copy of the current contents of the cache, ordered from least
     * recently accessed to most recently accessed.
     */
    public synchronized final Map<K, V> snapshot() {
        return new LinkedHashMap<K, V>(map);
    }  

    @Override public synchronized final String toString() {
        int accesses = hitCount + missCount;
        int hitPercent = accesses != 0 ? (100 * hitCount / accesses) : 0;
        return String.format("LruCache[maxSize=%d,hits=%d,misses=%d,hitRate=%d%%]",
                maxSize, hitCount, missCount, hitPercent);
    }
}  
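As the class javadoc notes, values that hold resources needing explicit release should be handled by overriding entryRemoved(). Here is a hedged sketch of that idea (ClosingCache is a made-up helper, not part of the framework), caching Closeable values and closing them once they leave the cache:

import android.util.LruCache;

import java.io.Closeable;
import java.io.IOException;

public class ClosingCache<K> extends LruCache<K, Closeable> {

    public ClosingCache(int maxEntries) {
        super(maxEntries); // default sizeOf() counts entries
    }

    @Override
    protected void entryRemoved(boolean evicted, K key, Closeable oldValue, Closeable newValue) {
        // Runs after an eviction (evicted == true) or after put()/remove() displaces a value.
        // Only oldValue has left the cache; newValue, if present, is still live.
        try {
            oldValue.close();
        } catch (IOException ignored) {
            // Best-effort cleanup; the entry has already been removed.
        }
    }
}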

Where does the eldest() method come from in the line Map.Entry<K, V> toEvict = map.eldest(); inside trimToSize()? The listing above is the framework LruCache from android.util, not the one in the support-v4 package, and eldest() is a hidden API of the framework's LinkedHashMap:

    /**
     * Returns the eldest entry in the map, or {@code null} if the map is empty.
     * @hide
     */
    public Entry<K, V> eldest() {
        LinkedEntry<K, V> eldest = header.nxt;
        return eldest != header ? eldest : null;
    }  
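A plain java.util.LinkedHashMap has no eldest() method, but because an access-ordered map iterates from least recently accessed to most recently accessed, the first entry returned by its iterator is the eldest one (the support-library LruCache relies on essentially this trick). A minimal standalone sketch:

import java.util.LinkedHashMap;
import java.util.Map;

public class EldestLookup {
    public static void main(String[] args) {
        // The same access-ordered configuration LruCache uses internally.
        LinkedHashMap<String, String> map = new LinkedHashMap<String, String>(0, 0.75f, true);
        map.put("a", "1");
        map.put("b", "2");
        map.get("a"); // "a" moves to the most recently used position, leaving "b" as the eldest

        Map.Entry<String, String> eldest =
                map.isEmpty() ? null : map.entrySet().iterator().next();
        System.out.println(eldest); // b=2
    }
}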

Programming references:

Blog posts by Guo Lin:

- Android高效加载大图、多图解决方案,有效避免程序OOM (efficiently loading large and multiple images while avoiding OOM)

- Android照片墙应用实现,再多的图片也不怕崩溃 (implementing an Android photo wall that does not crash no matter how many images it loads)

Others:

- Android 异步加载图片,使用LruCache和SD卡或手机缓存,效果非常的流畅 (asynchronous image loading with LruCache plus SD card/device caching)

- 图片缓存之内存缓存技术LruCache,软引用 (image caching with the in-memory LruCache and soft references)


