Part 1. The Volley Framework Diagram
Based on the diagram, we can make a rough guess at how Volley works. See the legend in the lower-right corner: blue denotes the main thread, green the cache thread, and yellow the network threads.
Next, note the key terms in the diagram: queue (RequestQueue), cache queue, CacheDispatcher, and NetworkDispatcher.
The flow can be summarized as follows: RequestQueue.add() puts a Request into the cache queue. CacheDispatcher takes the Request from that queue; if a matching result is already stored in the cache, it reads and parses it directly from the cache and delivers the response back to the main thread. If nothing is found in the cache, the Request is added to the network queue, where the HTTP transaction is performed and the result of the network request is delivered back to the main thread.
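Before diving into the source, here is a minimal usage sketch of that flow inside an Activity (the URL is a placeholder): create a RequestQueue, build a Request, and add it to the queue; everything after add() is handled by the dispatcher threads described above.

RequestQueue queue = Volley.newRequestQueue(this);

StringRequest request = new StringRequest(Request.Method.GET,
        "http://example.com/api",  // placeholder URL
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                // Delivered on the main thread with the parsed result
                Log.d("Volley", "response: " + response);
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                // Delivered on the main thread when the request fails
                Log.e("Volley", "request failed", error);
            }
        });

queue.add(request);  // the entry point analyzed in Part 2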
Part 2. Source Code Analysis
From the diagram above, the entry point of the flow is RequestQueue's add() method. Let's start with how the RequestQueue is created.
(1) Using RequestQueue:
RequestQueue mRequestQueue = Volley.newRequestQueue(this);
Let's look at the logic of Volley.newRequestQueue. The Volley class contains only two methods in total:
/** Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it. */
public static RequestQueue newRequestQueue(Context context) {
    return newRequestQueue(context, null);
}
The main work is done here:
/** Default on-disk cache directory. */
private static final String DEFAULT_CACHE_DIR = "volley";

/**
 * Creates a default instance of the worker pool and calls {@link RequestQueue#start()} on it.
 *
 * @param context A {@link Context} to use for creating the cache dir.
 * @param stack An {@link HttpStack} to use for the network, or null for default.
 * @return A started {@link RequestQueue} instance.
 */
public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
    // Create the cache directory
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

    String userAgent = "volley/0";
    try {
        String packageName = context.getPackageName();
        PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
        userAgent = packageName + "/" + info.versionCode;
    } catch (NameNotFoundException e) {
    }

    /*
     * As explained in http://blog.csdn.net/guolin_blog/article/details/12452307,
     * HurlStack is implemented with HttpURLConnection and HttpClientStack with HttpClient.
     * Before Android 2.3, HttpClient is preferable because it has fewer bugs; from
     * Android 2.3 on, HttpURLConnection is preferable because it is lighter-weight and has
     * a simpler API. Hence the branch between HurlStack and HttpClientStack below.
     */
    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            stack = new HurlStack();
        } else {
            stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
        }
    }

    // Create a Network object backed by the stack
    Network network = new BasicNetwork(stack);

    // Create the RequestQueue
    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    queue.start();  // the entry point for the rest of the analysis
    return queue;
}
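As a side note, since every call to Volley.newRequestQueue() builds a new cache and a new set of dispatcher threads, a common application-level pattern is to create the queue once and reuse it. This is not part of the Volley source; the holder class below is purely illustrative:

public class VolleyHolder {
    private static RequestQueue sQueue;

    // Use the application context so no Activity is leaked by the long-lived queue
    public static synchronized RequestQueue get(Context context) {
        if (sQueue == null) {
            sQueue = Volley.newRequestQueue(context.getApplicationContext());
        }
        return sQueue;
    }
}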
Note I) Part of the code of HurlStack, which shows that it is implemented on top of HttpURLConnection:
private static HttpEntity entityFromConnection(HttpURLConnection connection)
Correspondingly, the constructor of HttpClientStack shows that it is implemented on top of HttpClient:
public HttpClientStack(HttpClient client) {
    mClient = client;
}
Both implement the HttpStack interface:
/** An HTTP stack abstraction. */
public interface HttpStack {
    public HttpResponse performRequest(Request<?> request, Map<String, String> additionalHeaders)
            throws IOException, AuthFailureError;
}
Before Android 2.3, HttpURLConnection had quite a few bugs while HttpClient's API was already fairly complete, so HttpClient was the better choice; that is why HttpClientStack is used for SDK versions below 9. From Android 2.3 onward, HttpURLConnection has kept improving: it is more lightweight, its API is simpler, and its performance keeps being optimized, so the HttpURLConnection-based HurlStack is used instead.
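Because the stack parameter is only substituted when it is null, callers may also pass a concrete implementation themselves, for example to always use the HttpURLConnection-based stack (a minimal sketch; whether this is appropriate depends on the minimum SDK version you support):

// Force the HttpURLConnection-based stack regardless of SDK version
RequestQueue queue = Volley.newRequestQueue(context, new HurlStack());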
Note II) This introduces a Network object. Its constructors are shown below; it performs the network requests through the stack and is somewhat off the main path, so feel free to skim it.
/**
 * A network performing Volley requests over an {@link HttpStack}.
 */
public class BasicNetwork implements Network {
    ...
    private static int DEFAULT_POOL_SIZE = 4096;

    protected final HttpStack mHttpStack;
    protected final ByteArrayPool mPool;

    public BasicNetwork(HttpStack httpStack) {
        // If a pool isn't passed in, then build a small default pool that will give us a lot of
        // benefit and not use too much memory.
        this(httpStack, new ByteArrayPool(DEFAULT_POOL_SIZE));
    }

    /**
     * @param httpStack HTTP stack to be used
     * @param pool a buffer pool that improves GC performance in copy operations
     */
    public BasicNetwork(HttpStack httpStack, ByteArrayPool pool) {
        mHttpStack = httpStack;
        mPool = pool;
    }
    ...
}
It stores the stack it was given and creates a byte array pool (ByteArrayPool), which lets response-copying code reuse buffers instead of allocating a fresh byte[] every time.
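To illustrate the idea behind such a pool, here is a deliberately simplified, hypothetical sketch (this is not Volley's ByteArrayPool, which additionally caps the total pooled size and keeps buffers sorted by length):

import java.util.LinkedList;

// Hypothetical minimal buffer pool: hand out a cached byte[] when one is large enough,
// otherwise allocate; returned buffers are kept for later reuse to reduce GC pressure.
class SimpleBytePool {
    private final LinkedList<byte[]> mBuffers = new LinkedList<byte[]>();

    public synchronized byte[] getBuf(int len) {
        for (byte[] buf : mBuffers) {
            if (buf.length >= len) {
                mBuffers.remove(buf);
                return buf;
            }
        }
        return new byte[len];
    }

    public synchronized void returnBuf(byte[] buf) {
        if (buf != null) {
            mBuffers.add(buf);
        }
    }
}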
Note III) Back to the important RequestQueue and its constructors:
/** Number of network request dispatcher threads to start. */
private static final int DEFAULT_NETWORK_THREAD_POOL_SIZE = 4;

public RequestQueue(Cache cache, Network network) {
    this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
}

public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(cache, network, threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}

/**
 * Creates the worker pool. Processing will not begin until {@link #start()} is called.
 *
 * @param cache A Cache to use for persisting responses to disk
 * @param network A Network interface for performing HTTP requests
 * @param threadPoolSize Number of network dispatcher threads to create
 * @param delivery A ResponseDelivery interface for posting responses and errors
 */
public RequestQueue(Cache cache, Network network, int threadPoolSize, ResponseDelivery delivery) {
    mCache = cache;
    mNetwork = network;
    mDispatchers = new NetworkDispatcher[threadPoolSize];
    mDelivery = delivery;
}
Here we meet an important object from the earlier analysis: NetworkDispatcher. Much like a thread pool, a NetworkDispatcher array of size threadPoolSize is allocated. We will skip its internal logic for now; the first thing to know is that it is a thread:
public class NetworkDispatcher extends Thread
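As a preview before we get to it, the heart of its run() method is roughly the loop below, where mQueue, mNetwork, mCache and mDelivery are the fields passed into its constructor (a simplified sketch based on the Volley source; cancellation, error handling and traffic-stats tagging are omitted):

while (true) {
    // Block until a request is available in the network queue
    Request<?> request = mQueue.take();

    // Perform the HTTP request on this background thread
    NetworkResponse networkResponse = mNetwork.performRequest(request);

    // Let the request parse the raw response into its target type
    Response<?> response = request.parseNetworkResponse(networkResponse);

    // Write the entry to the cache if the request allows it
    if (request.shouldCache() && response.cacheEntry != null) {
        mCache.put(request.getCacheKey(), response.cacheEntry);
    }

    // Post the parsed response back to the main thread
    mDelivery.postResponse(request, response);
}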
To summarize the work done so far in this first part by Volley.newRequestQueue():
1) It creates the Cache;
2) It creates an HttpStack and builds a Network object on top of it;
3) It creates the RequestQueue object, whose constructor allocates a NetworkDispatcher array of size threadPoolSize (note: the NetworkDispatcher objects themselves are not created yet);
4) It calls RequestQueue.start().
(2) Starting from the start() method:
1. RequestQueue.start():
/** Starts the dispatchers in this queue. */
public void start() {
    stop();  // Make sure any currently running dispatchers are stopped.

    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();

    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher =
                new NetworkDispatcher(mNetworkQueue, mNetwork, mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}

/** Stops the cache and network dispatchers. */
public void stop() {
    if (mCacheDispatcher != null) {
        mCacheDispatcher.quit();
    }
    for (int i = 0; i < mDispatchers.length; i++) {
        if (mDispatchers[i] != null) {
            mDispatchers[i].quit();
        }
    }
}
start() is still doing initialization: it creates one CacheDispatcher thread (which also extends Thread) and threadPoolSize (4 by default) NetworkDispatcher threads. After start(), counting the main thread, there are six threads running in total. Looking back at the flow diagram, the yellow, green, and blue threads are all in place; the yellow and green threads run in the background, waiting for requests and dispatching them.
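If the default of four network dispatcher threads is not suitable, the three-argument constructor shown earlier can be used directly instead of Volley.newRequestQueue(); a sketch, where cacheDir would be prepared the same way as in newRequestQueue():

Network network = new BasicNetwork(new HurlStack());
// Start only two NetworkDispatcher threads instead of the default four
RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network, 2);
queue.start();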
The focus now shifts to the two main worker threads, CacheDispatcher and NetworkDispatcher. Reading their source directly turned out to be a bit difficult, so let's first walk through the usual Volley workflow once more: after the RequestQueue is created, we build our own Request (covered in an earlier article) and then hand it to the queue via RequestQueue.add().
2. Now look at RequestQueue.add(), the entry point of the flow in the diagram above:
/**
 * The set of all requests currently being processed by this RequestQueue. A Request
 * will be in this set if it is waiting in any queue or currently being processed by any dispatcher.
 */
private final Set<Request> mCurrentRequests = new HashSet<Request>();

/**
 * Adds a Request to the dispatch queue.
 * @param request The request to service
 * @return The passed-in request
 */
public Request add(Request request) {
    // Tag the request as belonging to this queue and add it to the set of current requests.
    request.setRequestQueue(this);  // see Note I: the request records which RequestQueue it belongs to
    synchronized (mCurrentRequests) {
        // mCurrentRequests holds all requests owned by this RequestQueue, backed by a HashSet
        mCurrentRequests.add(request);
    }

    // Initialize the newly added request
    request.setSequence(getSequenceNumber());
    request.addMarker("add-to-queue");

    // See Note II: check whether the request may be cached
    if (!request.shouldCache()) {
        mNetworkQueue.add(request);
        return request;
    }

    // The request may be cached.
    // Insert request into stage if there's already a request with the same cache key in flight.
    synchronized (mWaitingRequests) {  // see Note III
        String cacheKey = request.getCacheKey();
        if (mWaitingRequests.containsKey(cacheKey)) {
            // There is already a request in flight. Queue up.
            Queue<Request> stagedRequests = mWaitingRequests.get(cacheKey);
            if (stagedRequests == null) {
                stagedRequests = new LinkedList<Request>();
            }
            stagedRequests.add(request);
            mWaitingRequests.put(cacheKey, stagedRequests);
            if (VolleyLog.DEBUG) {
                VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
            }
        } else {
            // Insert 'null' queue for this cacheKey, indicating there is now a request in flight.
            mWaitingRequests.put(cacheKey, null);
            mCacheQueue.add(request);
        }
        return request;
    }
}
Note I) Request.setRequestQueue(), as the name suggests, records which RequestQueue a Request belongs to; it is a simple setter:
/** The request queue this request is associated with. */
private RequestQueue mRequestQueue;

/**
 * Associates this request with the given queue. The request queue will be notified when this
 * request has finished.
 */
public void setRequestQueue(RequestQueue requestQueue) {
    mRequestQueue = requestQueue;
}
Note II) request.shouldCache() decides whether the request may be cached (allowed by default; it can be disabled with setShouldCache(false)). If caching is not allowed, the request is added directly to mNetworkQueue and add() returns.
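For example, to make one particular request skip the cache queue entirely (a minimal usage sketch, assuming queue is the RequestQueue from earlier):

request.setShouldCache(false);  // this request will go straight into mNetworkQueue
queue.add(request);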
/** The queue of requests that are actually going out to the network. */
private final PriorityBlockingQueue<Request> mNetworkQueue = new PriorityBlockingQueue<Request>();
RequestQueue itself is not really a Queue. The queue that actually stores the requests for the worker threads to read from and operate on is mNetworkQueue, whose type is PriorityBlockingQueue.
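PriorityBlockingQueue orders its elements through Comparable, and Request.compareTo() does so roughly as sketched below: higher-priority requests sort to the front, while requests of equal priority come out in FIFO order thanks to the sequence number assigned in add() (simplified from the Volley source):

@Override
public int compareTo(Request<T> other) {
    Priority left = this.getPriority();
    Priority right = other.getPriority();

    // High-priority requests are "lesser", so they sort to the front of the queue;
    // equal priorities fall back to FIFO ordering via the sequence number.
    return left == right
            ? this.mSequence - other.mSequence
            : right.ordinal() - left.ordinal();
}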
Note III) mWaitingRequests:
/**
 * Staging area for requests that already have a duplicate request in flight.
 * <ul>
 *     <li>containsKey(cacheKey) indicates that there is a request in flight for the given cache
 *         key.</li>
 *     <li>get(cacheKey) returns waiting requests for the given cache key. The in flight request
 *         is <em>not</em> contained in that list. Is null if no requests are staged.</li>
 * </ul>
 */
private final Map<String, Queue<Request>> mWaitingRequests =
        new HashMap<String, Queue<Request>>();
This field is related to the caching strategy.
containsKey(cacheKey): true means a request for the given cache key is already in flight;
get(cacheKey): returns the waiting requests (a Queue<Request>) for the given cache key.
The workflow for staging requests is as follows (a sketch of how the staged requests are released afterwards follows this list):
1) For each newly added request, its cacheKey is obtained first;
2) If mWaitingRequests does not yet contain that cacheKey, put(cacheKey, null) is called; the null value simply marks that a request for this cacheKey is now in flight, and the request itself goes into mCacheQueue;
3) If mWaitingRequests already contains the cacheKey, get(cacheKey) fetches the corresponding Queue. If that Queue is null, we know from step 2 that only the in-flight request exists for this cacheKey so far, so a new map value, a Queue<Request> (implemented here with a LinkedList), is created; the new request is then added to that queue, which is put back into the map.
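What eventually happens to the staged requests is handled by RequestQueue.finish(), which the in-flight request calls when it completes. Roughly (a simplified sketch of that method with logging omitted), the staged duplicates are drained into the cache queue, where they can be answered from the cache entry the finished request just wrote:

void finish(Request request) {
    // Remove from the set of requests currently being processed.
    synchronized (mCurrentRequests) {
        mCurrentRequests.remove(request);
    }
    if (request.shouldCache()) {
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            Queue<Request> waitingRequests = mWaitingRequests.remove(cacheKey);
            if (waitingRequests != null) {
                // The cache has just been primed by the finished request, so all
                // queued duplicates can now be served by the cache dispatcher.
                mCacheQueue.addAll(waitingRequests);
            }
        }
    }
}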