Volley Source Code Analysis: A Model of Interface-Oriented Programming

Basic Principles

Volley uses a producer-consumer model. Producers (users of Volley) add requests to the request queue by calling add(); the cache dispatcher and the network dispatchers act as consumers, taking requests off the queue and deciding, depending on the situation, whether to serve them from the cache or fetch them over the network. Finally the result is switched back to the UI thread and delivered to the caller.
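For orientation, here is a minimal sketch of the producer side, assuming the single-argument newRequestQueue(Context) overload and a plain StringRequest; the URL and the listener bodies are placeholders, not from the Volley source:

    // Producer: create a queue and hand it a request; the dispatcher threads consume it.
    RequestQueue queue = Volley.newRequestQueue(context);
    StringRequest request = new StringRequest(Request.Method.GET, "http://example.com/data",
            new Response.Listener<String>() {
                @Override
                public void onResponse(String response) {
                    // Invoked on the UI thread once a result has been delivered.
                }
            },
            new Response.ErrorListener() {
                @Override
                public void onErrorResponse(VolleyError error) {
                    // Invoked on the UI thread if the request fails.
                }
            });
    queue.add(request);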

Creating the Request Queue

Volley creates a RequestQueue through the static factory method newRequestQueue:

    public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
        File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);

        String userAgent = "volley/0";
        try {
            String packageName = context.getPackageName();
            PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
            userAgent = packageName + "/" + info.versionCode;
        } catch (NameNotFoundException e) {
        }

        if (stack == null) {
            if (Build.VERSION.SDK_INT >= 9) {
                stack = new HurlStack();
            } else {
                // Prior to Gingerbread, HttpUrlConnection was unreliable.
                // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
                stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
            }
        }

        Network network = new BasicNetwork(stack);

        RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
        queue.start();

        return queue;
    }

It first builds a userAgent string from the application's package name and version code.

It then chooses an HttpStack implementation based on the Android SDK version.

A BasicNetwork object is created on top of the HttpStack, and a DiskBasedCache object is created for the cache directory.

With the cache and the network in place, the RequestQueue can be constructed:

    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);

Finally the queue's start() method is called to launch the cache dispatcher and the network dispatchers. The consumers start working, continuously taking requests off the request queue and processing them, and blocking when the queue is empty.

Starting the Cache Dispatcher and the Network Dispatchers

    public void start() {
        stop();  // Make sure any currently running dispatchers are stopped.
        // Create the cache dispatcher and start it.
        mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
        mCacheDispatcher.start();

        // Create network dispatchers (and corresponding threads) up to the pool size.
        for (int i = 0; i < mDispatchers.length; i++) {
            NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
                    mCache, mDelivery);
            mDispatchers[i] = networkDispatcher;
            networkDispatcher.start();
        }
    }

As you can see, the queue's start() method launches one cache dispatcher thread and several (four by default) network dispatcher threads.
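The pool size comes from the RequestQueue constructors. A sketch of the relevant pieces, which to the best of my recollection match the Volley source (treat the exact constant as an assumption):

    /** Number of network request dispatcher threads to start. */
    private static final int DEFAULT_NETWORK_THREAD_POOL_SIZE = 4;

    public RequestQueue(Cache cache, Network network) {
        this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
    }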

CacheDispatcher: the Cache Dispatcher

The cache dispatcher extends Thread, so it is in fact a thread:

public class CacheDispatcher extends Thread

CacheDispatcher is a perfect example of interface-oriented programming: every field it depends on is an interface rather than a concrete implementation, and the dependencies are injected through the constructor.

    /** The queue of requests coming in for triage. */
    private final BlockingQueue<Request<?>> mCacheQueue;

    /** The queue of requests going out to the network. */
    private final BlockingQueue<Request<?>> mNetworkQueue;

    /** The cache to read from. */
    private final Cache mCache;

    /** For posting responses. */
    private final ResponseDelivery mDelivery;

BlockingQueue, Cache, and ResponseDelivery are all interfaces rather than concrete implementations, which reduces coupling between classes and makes the design much more flexible.
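Because only the interfaces are referenced, any conforming implementation can be swapped in without touching the dispatchers. A hedged illustration, where MyCustomCache is a hypothetical class implementing com.android.volley.Cache:

    // The dispatchers never see the concrete type, only the Cache interface.
    Cache cache = new MyCustomCache();  // hypothetical Cache implementation
    Network network = new BasicNetwork(new HurlStack());
    RequestQueue queue = new RequestQueue(cache, network);
    queue.start();

Back to CacheDispatcher itself: its run() method drives the whole triage loop.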

    public void run() {
        if (DEBUG) VolleyLog.v("start new dispatcher");
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);

        // Make a blocking call to initialize the cache.
        mCache.initialize();

        while (true) {
            try {
                // Get a request from the cache triage queue, blocking until
                // at least one is available.
                final Request<?> request = mCacheQueue.take();
                request.addMarker("cache-queue-take");

                // If the request has been canceled, don't bother dispatching it.
                if (request.isCanceled()) {
                    request.finish("cache-discard-canceled");
                    continue;
                }

                // Attempt to retrieve this item from cache.
                Cache.Entry entry = mCache.get(request.getCacheKey());
                if (entry == null) {
                    request.addMarker("cache-miss");
                    // Cache miss; send off to the network dispatcher.
                    mNetworkQueue.put(request);
                    continue;
                }

                // If it is completely expired, just send it to the network.
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }

                // We have a cache hit; parse its data for delivery back to the request.
                request.addMarker("cache-hit");
                Response<?> response = request.parseNetworkResponse(
                        new NetworkResponse(entry.data, entry.responseHeaders));
                request.addMarker("cache-hit-parsed");

                if (!entry.refreshNeeded()) {
                    // Completely unexpired cache hit. Just deliver the response.
                    mDelivery.postResponse(request, response);
                } else {
                    // Soft-expired cache hit. We can deliver the cached response,
                    // but we need to also send the request to the network for
                    // refreshing.
                    request.addMarker("cache-hit-refresh-needed");
                    request.setCacheEntry(entry);

                    // Mark the response as intermediate.
                    response.intermediate = true;

                    // Post the intermediate response back to the user and have
                    // the delivery then forward the request along to the network.
                    mDelivery.postResponse(request, response, new Runnable() {
                        @Override
                        public void run() {
                            try {
                                mNetworkQueue.put(request);
                            } catch (InterruptedException e) {
                                // Not much we can do about this.
                            }
                        }
                    });
                }

            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }
        }
    }

After setting its thread priority and initializing the cache, the cache dispatcher thread enters an infinite loop, continuously taking requests from mCacheQueue and processing them.

If the request taken off the queue has already been canceled, it is finished and the loop skips ahead to the next request (how a request typically gets canceled in the first place is sketched after the snippet):

      if (request.isCanceled()) {
          request.finish("cache-discard-canceled");
          continue;
      }
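How does a request end up canceled? Typically the caller cancels it, for example when an Activity goes away. A hedged sketch of the usual tag-based pattern (the tag value here is arbitrary):

    // Tag the request when adding it...
    request.setTag("MainActivity");
    queue.add(request);

    // ...and later cancel everything carrying that tag, e.g. in onStop(). Canceled requests
    // still sitting in a queue are then discarded by the dispatchers as shown above.
    queue.cancelAll("MainActivity");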

If the request has not been canceled, the dispatcher looks it up in the cache. On a cache miss, the request is handed to the network queue so the data can be fetched from the network, and the loop continues with the next request from the cache queue:

      // Attempt to retrieve this item from cache.
      Cache.Entry entry = mCache.get(request.getCacheKey());
      if (entry == null) {
          request.addMarker("cache-miss");
          // Cache miss; send off to the network dispatcher.
          mNetworkQueue.put(request);
          continue;
      }

If there is a cache hit but the entry has completely expired, the data still has to be fetched from the network again:

                // If it is completely expired, just send it to the network.
                if (entry.isExpired()) {
                    request.addMarker("cache-hit-expired");
                    request.setCacheEntry(entry);
                    mNetworkQueue.put(request);
                    continue;
                }

If the cache entry has not expired and does not need refreshing, the cached data is posted straight to the UI thread, saving a network round trip:

        if (!entry.refreshNeeded()) {
           // Completely unexpired cache hit. Just deliver the response.
           mDelivery.postResponse(request, response);
        } 

If the cache entry has not expired but does need refreshing, the cached data is posted to the UI thread first and the request is then quietly sent to the network in the background; once the network dispatcher has fetched the latest data it refreshes both the cache and the UI.

            else {
               // Soft-expired cache hit. We can deliver the cached response,
               // but we need to also send the request to the network for
               // refreshing.
               request.addMarker("cache-hit-refresh-needed");
               request.setCacheEntry(entry);

               // Mark the response as intermediate.
               response.intermediate = true;

               // Post the intermediate response back to the user and have
               // the delivery then forward the request along to the network.
               mDelivery.postResponse(request, response, new Runnable() {
                   @Override
                   public void run() {
                       try {
                           mNetworkQueue.put(request);
                       } catch (InterruptedException e) {
                           // Not much we can do about this.
                       }
                   }
               });
           }

The three-argument postResponse used here is declared on the ResponseDelivery interface:

    /**
     * Parses a response from the network or cache and delivers it. The provided
     * Runnable will be executed after delivery.
     */
    public void postResponse(Request<?> request, Response<?> response, Runnable runnable);

postResponse first delivers the cached data to the UI thread and then runs the Runnable, which calls mNetworkQueue.put(request) so that the latest data is fetched from the network.

NetworkDispatcher: the Network Dispatcher

NetworkDispatcher is very similar to CacheDispatcher: both extend Thread, and both are models of interface-oriented programming.

    /** The queue of requests to service. */
    private final BlockingQueue<Request<?>> mQueue;
    /** The network interface for processing requests. */
    private final Network mNetwork;
    /** The cache to write to. */
    private final Cache mCache;
    /** For posting responses and errors. */
    private final ResponseDelivery mDelivery;

The network dispatcher continuously takes requests from the network queue mQueue, fetches fresh data through mNetwork, writes that data into the cache mCache, and uses mDelivery to switch threads and deliver the data back to the UI thread.

    public void run() {
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
        while (true) {
            long startTimeMs = SystemClock.elapsedRealtime();
            Request<?> request;
            try {
                // Take a request from the queue.
                request = mQueue.take();
            } catch (InterruptedException e) {
                // We may have been interrupted because it was time to quit.
                if (mQuit) {
                    return;
                }
                continue;
            }

            try {
                request.addMarker("network-queue-take");

                // If the request was cancelled already, do not perform the
                // network request.
                if (request.isCanceled()) {
                    request.finish("network-discard-cancelled");
                    continue;
                }

                addTrafficStatsTag(request);

                // Perform the network request.
                NetworkResponse networkResponse = mNetwork.performRequest(request);
                request.addMarker("network-http-complete");

                // If the server returned 304 AND we delivered a response already,
                // we're done -- don't deliver a second identical response.
                if (networkResponse.notModified && request.hasHadResponseDelivered()) {
                    request.finish("not-modified");
                    continue;
                }

                // Parse the response here on the worker thread.
                Response<?> response = request.parseNetworkResponse(networkResponse);
                request.addMarker("network-parse-complete");

                // Write to cache if applicable.
                // TODO: Only update cache metadata instead of entire record for 304s.
                if (request.shouldCache() && response.cacheEntry != null) {
                    mCache.put(request.getCacheKey(), response.cacheEntry);
                    request.addMarker("network-cache-written");
                }

                // Post the response back.
                request.markDelivered();
                mDelivery.postResponse(request, response);
            } catch (VolleyError volleyError) {
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                parseAndDeliverNetworkError(request, volleyError);
            } catch (Exception e) {
                VolleyLog.e(e, "Unhandled exception %s", e.toString());
                VolleyError volleyError = new VolleyError(e);
                volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
                mDelivery.postError(request, volleyError);
            }
        }
    }

After setting its thread priority, the network dispatcher thread enters an infinite loop, continuously taking requests from the network queue and processing them.

If the request taken off the queue has already been canceled, it is finished and the loop continues with the next request:

     // If the request was cancelled already, do not perform the
     // network request.
     if (request.isCanceled()) {
         request.finish("network-discard-cancelled");
         continue;
     }

If the request has not been canceled, the network access is performed through mNetwork:

    // Perform the network request.
    NetworkResponse networkResponse = mNetwork.performRequest(request);

A 304 response from the server means the requested resource has not changed since it was last requested; Volley makes this possible by sending conditional headers built from the existing cache entry (sketched a little further below).

If the server returns 304 and a response has already been delivered for this request, the earlier response can be reused; there is no need to deliver an identical one, so the loop continues with the next request from the network queue:

    // If the server returned 304 AND we delivered a response already,
    // we're done -- don't deliver a second identical response.
    if (networkResponse.notModified && request.hasHadResponseDelivered()) {
        request.finish("not-modified");
        continue;
    }

    /** Returns true if this request has had a response delivered for it. */
    public boolean hasHadResponseDelivered()
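The 304 is only possible because BasicNetwork sends a conditional request built from the existing cache entry. A simplified sketch of that header logic, reconstructed from memory of BasicNetwork, so treat the details (method shape, date formatting) as assumptions:

    // Attach validators from the cache entry so the server can answer with 304 Not Modified.
    void addCacheHeaders(Map<String, String> headers, Cache.Entry entry) {
        if (entry == null) {
            return;  // no prior cache entry: send an unconditional request
        }
        if (entry.etag != null) {
            headers.put("If-None-Match", entry.etag);
        }
        if (entry.serverDate > 0) {
            // Format the cached server date as an RFC 1123 HTTP date for If-Modified-Since.
            SimpleDateFormat rfc1123 = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss zzz", Locale.US);
            rfc1123.setTimeZone(TimeZone.getTimeZone("GMT"));
            headers.put("If-Modified-Since", rfc1123.format(new Date(entry.serverDate)));
        }
    }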

If the status code is not 304, the data has changed on the server, so the network response is parsed into a fresh Response:

    // Parse the response here on the worker thread.
    Response<?> response = request.parseNetworkResponse(networkResponse);

If the request should be cached and the response produced a cache entry, the result of this network request is written to the cache:

    if (request.shouldCache() && response.cacheEntry != null) {
        mCache.put(request.getCacheKey(), response.cacheEntry);
        request.addMarker("network-cache-written");
    }

Finally the response is delivered to the user:

    mDelivery.postResponse(request, response);

If the network request fails, the failure is likewise delivered to the user, who can then handle it:

    mDelivery.postError(request, volleyError);

Fields of RequestQueue

    /** Used for generating monotonically-increasing sequence numbers for requests. */
    private AtomicInteger mSequenceGenerator = new AtomicInteger();

Every request added to the queue is given a globally unique sequence number; using an AtomicInteger guarantees that no duplicate numbers are produced even when requests are added concurrently from multiple threads.
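The numbers are handed out by a small helper on RequestQueue, which as far as I recall is essentially:

    /** Gets a sequence number, unique to this queue. */
    public int getSequenceNumber() {
        return mSequenceGenerator.incrementAndGet();
    }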

    /**
     * Staging area for requests that already have a duplicate request in flight.
     * containsKey(cacheKey) indicates that there is a request in flight for the given cache key.
     * get(cacheKey) returns waiting requests for the given cache key. The in-flight request is
     * not contained in that list. Is null if no requests are staged.
     */
    private final Map<String, Queue<Request<?>>> mWaitingRequests =
            new HashMap<String, Queue<Request<?>>>();

mWaitingRequests is a Map whose keys are cache keys and whose values are queues holding all requests waiting on the result for that cache key, in other words the queue of duplicate requests for the same URL.

    /**
     * The set of all requests currently being processed by this RequestQueue. A Request
     * will be in this set if it is waiting in any queue or currently being processed by
     * any dispatcher.
     */
    private final Set<Request<?>> mCurrentRequests = new HashSet<Request<?>>();

mCurrentRequests is a Set containing every request currently being processed by this queue.

    /** The cache triage queue. */
    private final PriorityBlockingQueue<Request<?>> mCacheQueue =
        new PriorityBlockingQueue<Request<?>>();

    /** The queue of requests that are actually going out to the network. */
    private final PriorityBlockingQueue<Request<?>> mNetworkQueue =
        new PriorityBlockingQueue<Request<?>>();

Two priority queues: the cache request queue and the network request queue.

The Producer: Adding Requests

    public <T> Request<T> add(Request<T> request) {
        // Tag the request as belonging to this queue and add it to the set of current requests.
        request.setRequestQueue(this);
        synchronized (mCurrentRequests) {
            mCurrentRequests.add(request);
        }

        // Process requests in the order they are added.
        request.setSequence(getSequenceNumber());
        request.addMarker("add-to-queue");

        // If the request is uncacheable, skip the cache queue and go straight to the network.
        if (!request.shouldCache()) {
            mNetworkQueue.add(request);
            return request;
        }

        // Insert request into stage if there's already a request with the same cache key in flight.
        synchronized (mWaitingRequests) {
            String cacheKey = request.getCacheKey();
            if (mWaitingRequests.containsKey(cacheKey)) {
                // There is already a request in flight. Queue up.
                Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
                if (stagedRequests == null) {
                    stagedRequests = new LinkedList<Request<?>>();
                }
                stagedRequests.add(request);
                mWaitingRequests.put(cacheKey, stagedRequests);
                if (VolleyLog.DEBUG) {
                    VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
                }
            } else {
                // Insert 'null' queue for this cacheKey, indicating there is now a request in
                // flight.
                mWaitingRequests.put(cacheKey, null);
                mCacheQueue.add(request);
            }
            return request;
        }
    }

First the new request is bound to this queue and added to the set of requests currently in progress:

        // Tag the request as belonging to this queue and add it to the set of current requests.
        request.setRequestQueue(this);
        synchronized (mCurrentRequests) {
            mCurrentRequests.add(request);
        }

The new request is assigned a sequence number; sequence numbers increase in the order requests are added:

        // Process requests in the order they are added.
        request.setSequence(getSequenceNumber());

If the request is marked as non-cacheable, the cache dispatcher is skipped entirely and the request goes straight onto the network queue:

        // If the request is uncacheable, skip the cache queue and go straight to the network.
        if (!request.shouldCache()) {
            mNetworkQueue.add(request);
            return request;
        }

The method then checks whether a request with the same cache key (i.e. the same URL) is already in flight. If so, the new request is simply placed into that cache key's waiting queue (the duplicate-request queue), which avoids requesting the same URL multiple times:

    if (mWaitingRequests.containsKey(cacheKey)) {
        // There is already a request in flight. Queue up.
        Queue<Request<?>> stagedRequests = mWaitingRequests.get(cacheKey);
        if (stagedRequests == null) {
            stagedRequests = new LinkedList<Request<?>>();
        }
        stagedRequests.add(request);
        mWaitingRequests.put(cacheKey, stagedRequests);
    }

If it is not a duplicate, that is, this is the first request for that cache key, a null entry is put into mWaitingRequests for the cache key (marking a request as now in flight) and the request is added to the cache queue:

    // Insert 'null' queue for this cacheKey, indicating there is now a request in
    // flight.
    mWaitingRequests.put(cacheKey, null);
    mCacheQueue.add(request);
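The staged duplicates are released when the in-flight request finishes: RequestQueue.finish() moves them onto the cache queue, where they will normally be answered by the entry the finished request has just written. A slightly simplified sketch of that method, based on my reading of the Volley source, so take the exact shape as an approximation:

    <T> void finish(Request<T> request) {
        // Remove the request from the set of requests currently being processed.
        synchronized (mCurrentRequests) {
            mCurrentRequests.remove(request);
        }

        if (request.shouldCache()) {
            synchronized (mWaitingRequests) {
                String cacheKey = request.getCacheKey();
                Queue<Request<?>> waitingRequests = mWaitingRequests.remove(cacheKey);
                if (waitingRequests != null) {
                    // Release the duplicates: they go through the cache dispatcher and will
                    // usually be served from the freshly written cache entry.
                    mCacheQueue.addAll(waitingRequests);
                }
            }
        }
    }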

ResponseDelivery: Switching Threads and Handing Results to the User

ResponseDelivery is an interface with a single direct implementation, ExecutorDelivery:

public class ExecutorDelivery implements ResponseDelivery

    /** Used for posting responses, typically to the main thread. */
    private final Executor mResponsePoster;

    /**
     * Creates a new response delivery interface.
     * @param handler {@link Handler} to post responses on
     */
    public ExecutorDelivery(final Handler handler) {
        // Make an Executor that just wraps the handler.
        mResponsePoster = new Executor() {
            @Override
            public void execute(Runnable command) {
                handler.post(command);
            }
        };
    }

The ExecutorDelivery constructor takes a Handler, and it is this Handler that performs the thread switch: if the Handler is bound to the UI thread's Looper, the posted command will be executed on the UI thread.

    @Override
    public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
        request.markDelivered();
        request.addMarker("post-response");
        mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
    }

postResponse wraps the request and response in a ResponseDeliveryRunnable and passes it to execute(), so that task ends up running on the UI thread. What exactly does ResponseDeliveryRunnable do there?

        public void run() {
            // If this request has canceled, finish it and don't deliver.
            if (mRequest.isCanceled()) {
                mRequest.finish("canceled-at-delivery");
                return;
            }

            // Deliver a normal response or error, depending.
            if (mResponse.isSuccess()) {
                mRequest.deliverResponse(mResponse.result);
            } else {
                mRequest.deliverError(mResponse.error);
            }

            // If this is an intermediate response, add a marker, otherwise we're done
            // and the request can be finished.
            if (mResponse.intermediate) {
                mRequest.addMarker("intermediate-response");
            } else {
                mRequest.finish("done");
            }

            // If we have been provided a post-delivery runnable, run it.
            if (mRunnable != null) {
                mRunnable.run();
            }
        }

The handling here branches on the outcome of the request. If the request has been canceled, its finish() method is called and the runnable returns immediately:

        if (mRequest.isCanceled()) {
            mRequest.finish("canceled-at-delivery");
            return;
        }

If the request succeeded, mRequest.deliverResponse() is called; if it failed, mRequest.deliverError() is called:

       // Deliver a normal response or error, depending.
       if (mResponse.isSuccess()) {
           mRequest.deliverResponse(mResponse.result);
       } else {
           mRequest.deliverError(mResponse.error);
       }

mRequest is declared as Request, an abstract type; at runtime it will actually be a StringRequest, a JsonRequest, another of the default implementations, or a user-defined subclass of Request carrying its own logic. This again reflects programming against abstractions rather than concrete types.

We now know that, assuming the request succeeded, deliverResponse() is called on the main thread. Taking StringRequest as an example, what does deliverResponse actually do?

    @Override
    protected void deliverResponse(String response) {
        mListener.onResponse(response);
    }

    public StringRequest(int method, String url, Listener<String> listener,
            ErrorListener errorListener) {
        super(method, url, errorListener);
        mListener = listener;
    }

So deliverResponse simply calls onResponse() on the callback listener; mListener is supplied by the user through the StringRequest constructor. When the request succeeds, the user therefore receives the resulting String directly in the onResponse() method of the Listener they passed in.
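Because Request is abstract, a user-defined subclass plugs into exactly the same machinery. Purely as an illustration, a hedged sketch of a minimal custom request (the class and its behavior are made up for this example):

    // Illustrative custom request that reports the raw response body length.
    public class BodyLengthRequest extends Request<Integer> {
        private final Response.Listener<Integer> mListener;

        public BodyLengthRequest(String url, Response.Listener<Integer> listener,
                Response.ErrorListener errorListener) {
            super(Method.GET, url, errorListener);
            mListener = listener;
        }

        @Override
        protected Response<Integer> parseNetworkResponse(NetworkResponse response) {
            // Runs on a NetworkDispatcher thread.
            return Response.success(response.data.length,
                    HttpHeaderParser.parseCacheHeaders(response));
        }

        @Override
        protected void deliverResponse(Integer length) {
            // Runs on the UI thread via ExecutorDelivery.
            mListener.onResponse(length);
        }
    }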

Where can we see that results are handed to the UI thread through a Handler? In the RequestQueue constructors:

    public RequestQueue(Cache cache, Network network, int threadPoolSize) {
        this(cache, network, threadPoolSize,
                new ExecutorDelivery(new Handler(Looper.getMainLooper())));
    }
    public RequestQueue(Cache cache, Network network, int threadPoolSize,
            ResponseDelivery delivery) {
        mCache = cache;
        mNetwork = network;
        mDispatchers = new NetworkDispatcher[threadPoolSize];
        mDelivery = delivery;
    }

    new ExecutorDelivery(new Handler(Looper.getMainLooper()))

Looper.getMainLooper() binds the Handler to the UI thread's Looper, so request results are switched onto the UI thread before being delivered to the user.
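Nothing forces delivery onto the main thread, though: since the four-argument constructor accepts any ResponseDelivery, results could just as well be delivered on a background HandlerThread. An illustrative sketch only (cacheDir is assumed to exist; 4 is the network thread pool size):

    // Illustration: deliver responses on a background thread instead of the UI thread.
    HandlerThread deliveryThread = new HandlerThread("volley-delivery");
    deliveryThread.start();

    RequestQueue queue = new RequestQueue(
            new DiskBasedCache(cacheDir),
            new BasicNetwork(new HurlStack()),
            4,
            new ExecutorDelivery(new Handler(deliveryThread.getLooper())));
    queue.start();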
