Solr 4.8.0 Source Code Analysis (5): An Overview of the Query Flow

As described in earlier posts, a Solr query arrives as an HTTP request that the Solr servlet accepts and processes, so the query flow starts in SolrDispatchFilter.doFilter(), the method that dispatches every kind of incoming HTTP request. Solr accepts many query parameters, such as q and fq; this post only follows the /select handler and the q parameter. A query issued from the admin page looks like this: http://localhost:8080/solr/test/select?q=code%3A%E8%BE%BD*+AND+last_modified%3A%5B0+TO+1408454600265%5D+AND+id%3Acheng&wt=json&indent=true

@Override
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
  doFilter(request, response, chain, false);
}
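
For reference, the same request can also be issued from a Java client. Below is a minimal SolrJ 4.x sketch written for this article; the endpoint, the core name test and the query string are simply taken from the URL above (the decoded q is code:辽* AND last_modified:[0 TO 1408454600265] AND id:cheng), and none of this code is part of SolrDispatchFilter itself.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

// Minimal SolrJ 4.x sketch of the /select request above (illustration only).
public class SelectExample {
  public static void main(String[] args) throws SolrServerException {
    HttpSolrServer server = new HttpSolrServer("http://localhost:8080/solr/test");
    // the decoded q parameter from the URL above
    SolrQuery query = new SolrQuery("code:辽* AND last_modified:[0 TO 1408454600265] AND id:cheng");
    QueryResponse rsp = server.query(query);   // on the server this passes through SolrDispatchFilter.doFilter
    System.out.println("hits: " + rsp.getResults().getNumFound());
    server.shutdown();
  }
}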

Since we only care about /select, the actual query work starts from the code below, where this.execute() is the entry point of the query. Note the writeResponse() call as well: execute() only collects the ids of the documents that match the query, and it is writeResponse() that later uses those doc ids to fetch the stored fields and write them into the returned result.

// With a valid handler and a valid core...
if( handler != null ) {
  // if not a /select, create the request
  if( solrReq == null ) {
    solrReq = parser.parse( core, path, req );
  }

  if (usingAliases) {
    processAliases(solrReq, aliases, collectionsList);
  }

  final Method reqMethod = Method.getMethod(req.getMethod());
  HttpCacheHeaderUtil.setCacheControlHeader(config, resp, reqMethod);
  // unless we have been explicitly told not to, do cache validation
  // if we fail cache validation, execute the query
  if (config.getHttpCachingConfig().isNever304() ||
      !HttpCacheHeaderUtil.doCacheHeaderValidation(solrReq, req, reqMethod, resp)) {
    SolrQueryResponse solrRsp = new SolrQueryResponse();
    /* even for HEAD requests, we need to execute the handler to
     * ensure we don't get an error (and to make sure the correct
     * QueryResponseWriter is selected and we get the correct
     * Content-Type)
     */
    SolrRequestInfo.setRequestInfo(new SolrRequestInfo(solrReq, solrRsp));
    this.execute( req, handler, solrReq, solrRsp );
    HttpCacheHeaderUtil.checkHttpCachingVeto(solrRsp, resp, reqMethod);
    // add info to http headers
    //TODO: See SOLR-232 and SOLR-267.
    /*try {
      NamedList solrRspHeader = solrRsp.getResponseHeader();
      for (int i=0; i<solrRspHeader.size(); i++) {
        ((javax.servlet.http.HttpServletResponse) response).addHeader(("Solr-" + solrRspHeader.getName(i)), String.valueOf(solrRspHeader.getVal(i)));
      }
    } catch (ClassCastException cce) {
      log.log(Level.WARNING, "exception adding response header log information", cce);
    }*/
    QueryResponseWriter responseWriter = core.getQueryResponseWriter(solrReq);
    writeResponse(solrRsp, response, responseWriter, solrReq, reqMethod);
  }
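
For context, the heart of writeResponse() can be sketched as follows (a simplification written for this article, not the full Solr method; HEAD handling, charset negotiation and binary writers are omitted): the writer chosen by core.getQueryResponseWriter(solrReq) — the JSON writer for our wt=json request — decides the Content-Type and then serializes the SolrQueryResponse to the servlet output, which is where the stored fields are finally pulled per doc id.

import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import javax.servlet.ServletResponse;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.QueryResponseWriter;
import org.apache.solr.response.SolrQueryResponse;

// Simplified sketch of the role writeResponse plays (not the actual Solr method body).
class WriteResponseSketch {
  static void writeResponse(SolrQueryResponse solrRsp, ServletResponse response,
                            QueryResponseWriter responseWriter, SolrQueryRequest solrReq)
      throws IOException {
    // the selected writer determines the Content-Type of the HTTP response
    response.setContentType(responseWriter.getContentType(solrReq, solrRsp));
    Writer out = new OutputStreamWriter(response.getOutputStream(), StandardCharsets.UTF_8);
    // serialization happens here; stored fields are read for each doc id at this point
    responseWriter.write(out, solrReq, solrRsp);
    out.flush();
  }
}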

Stepping into execute() brings us to SolrCore.execute(). preDecorateResponse() pre-populates the response header, postDecorateResponse() writes the elapsed time and other summary information into the response, and handler.handleRequest() carries the query forward.

public void execute(SolrRequestHandler handler, SolrQueryRequest req, SolrQueryResponse rsp) {
  if (handler==null) {
    String msg = "Null Request Handler '" +
      req.getParams().get(CommonParams.QT) + "'";

    if (log.isWarnEnabled()) log.warn(logid + msg + ":" + req);

    throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, msg);
  }

  preDecorateResponse(req, rsp);

  // TODO: this doesn't seem to be working correctly and causes problems with the example server and distrib (for example /spell)
  // if (req.getParams().getBool(ShardParams.IS_SHARD,false) && !(handler instanceof SearchHandler))
  //   throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,"isShard is only acceptable with search handlers");


  handler.handleRequest(req,rsp);
  postDecorateResponse(handler, req, rsp);

  if (log.isInfoEnabled() && rsp.getToLog().size() > 0) {
    log.info(rsp.getToLogAsString(logid));
  }
}
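
The effect of the two decorate calls can be roughed out as below (an illustration only, not the actual SolrCore source; the real methods also echo request parameters, honor omitHeader, and collect log fields): preDecorateResponse installs the responseHeader section before the handler runs, and postDecorateResponse fills in the status and the QTime once it returns.

import org.apache.solr.common.util.NamedList;
import org.apache.solr.common.util.SimpleOrderedMap;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;

// Rough sketch of what pre/postDecorateResponse contribute to the response header.
class DecorateSketch {
  static NamedList<Object> pre(SolrQueryRequest req, SolrQueryResponse rsp) {
    NamedList<Object> header = new SimpleOrderedMap<>();
    rsp.add("responseHeader", header);              // added before handler.handleRequest runs
    return header;
  }

  static void post(SolrQueryRequest req, NamedList<Object> header) {
    header.add("status", 0);                        // 0 unless an exception was recorded
    header.add("QTime", (int) (System.currentTimeMillis() - req.getStartTime()));
  }
}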

RequestHandlerBase.handleRequest(SolrQueryRequest req, SolrQueryResponse rsp) in turn calls SearchHandler.handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp), and only at this point are the search components actually brought into play.

The loop below runs all of the registered search components: QueryComponent, FacetComponent, MoreLikeThisComponent, HighlightComponent, StatsComponent, DebugComponent and ExpandComponent. Since this post only follows the query itself, we step into QueryComponent.java.

for( SearchComponent c : components ) {
    c.process(rb);
}    
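
In the single-shard case, SearchHandler.handleRequestBody essentially runs the components in two passes, prepare first and then process. A trimmed-down sketch of that loop (distributed search, debug timing and exception handling are left out; not the verbatim Solr code):

// Trimmed-down sketch of the non-distributed path in SearchHandler.handleRequestBody.
ResponseBuilder rb = new ResponseBuilder(req, rsp, components);
for (SearchComponent c : components) {
  c.prepare(rb);      // each component parses its own params (q, fq, sort, facet, ...)
}
for (SearchComponent c : components) {
  c.process(rb);      // QueryComponent.process is where the actual search happens
}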

Leaving aside the details of query handling inside QueryComponent.java (those are covered in later posts; this one is only an overview), QueryComponent.process(ResponseBuilder rb) calls SolrIndexSearcher.search(QueryResult qr, QueryCommand cmd) to run the search and then post-processes the returned results, mainly through doFieldSortValues(rb, searcher) and doPrefetch(rb).

// normal search result
searcher.search(result,cmd);
rb.setResult( result );

ResultContext ctx = new ResultContext();
ctx.docs = rb.getResults().docList;
ctx.query = rb.getQuery();
rsp.add("response", ctx);
rsp.getToLog().add("hits", rb.getResults().docList.matches());

if ( ! rb.req.getParams().getBool(ShardParams.IS_SHARD,false) ) {
  if (null != rb.getNextCursorMark()) {
    rb.rsp.add(CursorMarkParams.CURSOR_MARK_NEXT,
               rb.getNextCursorMark().getSerializedTotem());
  }
}
doFieldSortValues(rb, searcher);
doPrefetch(rb);
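
The cmd passed into searcher.search above is assembled from the ResponseBuilder earlier in process(); conceptually it looks like the following (a simplified sketch, not the verbatim Solr code — the real method also wires in cursors, grouping, filters and timeAllowed):

// Simplified sketch of how QueryComponent sets up the search call.
SolrIndexSearcher searcher = req.getSearcher();
SolrIndexSearcher.QueryCommand cmd = rb.getQueryCommand();   // carries query, filters, sort, offset, len, flags
SolrIndexSearcher.QueryResult result = new SolrIndexSearcher.QueryResult();
searcher.search(result, cmd);   // fills result with the doc ids (and optionally a DocSet) that match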

SolrIndexSearcher.search() itself is thin: it simply delegates to SolrIndexSearcher.getDocListC(), which, as the name suggests, produces the list of matching doc ids. This is where the real query begins. Before searching, Solr first consults the queryResultCache, which stores query-key to result-list pairs. If the cache already holds an entry for this query, Solr returns the cached result directly; otherwise it runs the query normally and then puts the query key and its result into the cache. The cache has a bounded size, configurable in the cache section of solrconfig.xml.

// we can try and look up the complete query in the cache.
// we can't do that if filter!=null though (we don't want to
// do hashCode() and equals() for a big DocSet).
if (queryResultCache != null && cmd.getFilter()==null
    && (flags & (NO_CHECK_QCACHE|NO_SET_QCACHE)) != ((NO_CHECK_QCACHE|NO_SET_QCACHE)))
{
    // all of the current flags can be reused during warming,
    // so set all of them on the cache key.
    key = new QueryResultKey(q, cmd.getFilterList(), cmd.getSort(), flags);
    if ((flags & NO_CHECK_QCACHE)==0) {
      superset = queryResultCache.get(key);

      if (superset != null) {
        // check that the cache entry has scores recorded if we need them
        if ((flags & GET_SCORES)==0 || superset.hasScores()) {
          // NOTE: subset() returns null if the DocList has fewer docs than
          // requested
          out.docList = superset.subset(cmd.getOffset(),cmd.getLen());
        }
      }
      if (out.docList != null) {
        // found the docList in the cache... now check if we need the docset too.
        // OPT: possible future optimization - if the doclist contains all the matches,
        // use it to make the docset instead of rerunning the query.
        if (out.docSet==null && ((flags & GET_DOCSET)!=0) ) {
          if (cmd.getFilterList()==null) {
            out.docSet = getDocSet(cmd.getQuery());
          } else {
            List<Query> newList = new ArrayList<>(cmd.getFilterList().size()+1);
            newList.add(cmd.getQuery());
            newList.addAll(cmd.getFilterList());
            out.docSet = getDocSet(newList);
          }
        }
        return;
      }
    }

  // If we are going to generate the result, bump up to the
  // next resultWindowSize for better caching.

  if ((flags & NO_SET_QCACHE) == 0) {
    // handle 0 special case as well as avoid idiv in the common case.
    if (maxDocRequested < queryResultWindowSize) {
      supersetMaxDoc=queryResultWindowSize;
    } else {
      supersetMaxDoc = ((maxDocRequested -1)/queryResultWindowSize + 1)*queryResultWindowSize;
      if (supersetMaxDoc < 0) supersetMaxDoc=maxDocRequested;
    }
  } else {
    key = null;  // we won't be caching the result
  }
}
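
The queryResultCache consulted above (and the queryResultWindowSize used in the rounding) are declared in the <query> section of solrconfig.xml. A typical configuration looks like the following; the sizes are only illustrative example values, not settings taken from this walkthrough:

<!-- example only: size/autowarm values are illustrative -->
<queryResultCache class="solr.LRUCache"
                  size="512"
                  initialSize="512"
                  autowarmCount="0"/>

<!-- rows fetched per query are rounded up to a multiple of this window (see supersetMaxDoc above) -->
<queryResultWindowSize>20</queryResultWindowSize>

<!-- upper bound on how many documents a single cached entry may hold -->
<queryResultMaxDocsCached>200</queryResultMaxDocsCached>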

If no matching cache entry is found, a normal search is performed. Here the query takes either the sorted or the unsorted branch (the difference between the two is discussed in a later post) and eventually reaches getDocListNC(qr, cmd) to do the actual work. superset.subset() then trims the result window: for example, with start=20 and rows=40 Solr actually fetches results from position 0 up to at least position 59 — that is, at least (start + rows) documents — and afterwards slices out rows 20 through 59 to return.
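
To make the window arithmetic concrete, here is a small worked example; the window size of 20 is just the common solrconfig default and is assumed here for illustration:

// Worked example of the rounding in getDocListC (queryResultWindowSize assumed to be 20):
int start = 20, rows = 40;
int queryResultWindowSize = 20;
int maxDocRequested = start + rows;                                   // 60
int supersetMaxDoc = ((maxDocRequested - 1) / queryResultWindowSize + 1) * queryResultWindowSize;  // 60
// Solr collects the top supersetMaxDoc (= 60) documents starting from 0, caches that superset,
// and superset.subset(start, rows) then returns documents 20..59 to the caller.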

if (useFilterCache) {
      // now actually use the filter cache.
      // for large filters that match few documents, this may be
      // slower than simply re-executing the query.
      if (out.docSet == null) {
        out.docSet = getDocSet(cmd.getQuery(),cmd.getFilter());
        DocSet bigFilt = getDocSet(cmd.getFilterList());
        if (bigFilt != null) out.docSet = out.docSet.intersection(bigFilt);
      }
      // todo: there could be a sortDocSet that could take a list of
      // the filters instead of anding them first...
      // perhaps there should be a multi-docset-iterator
      sortDocSet(qr, cmd);
    } else {
      // do it the normal way...
      if ((flags & GET_DOCSET)!=0) {
        // this currently conflates returning the docset for the base query vs
        // the base query and all filters.
        DocSet qDocSet = getDocListAndSetNC(qr,cmd);
        // cache the docSet matching the query w/o filtering
        if (qDocSet!=null && filterCache!=null && !qr.isPartialResults()) filterCache.put(cmd.getQuery(),qDocSet);
      } else {
        getDocListNC(qr,cmd);
      }
      assert null != out.docList : "docList is null";
    }

    if (null == cmd.getCursorMark()) {
      // Kludge...
      // we can't use DocSlice.subset, even though it should be an identity op
      // because it gets confused by situations where there are lots of matches, but
      // less docs in the slice then were requested, (due to the cursor)
      // so we have to short circuit the call.
      // None of which is really a problem since we can't use caching with
      // cursors anyway, but it still looks weird to have to special case this
      // behavior based on this condition - hence the long explanation.
      superset = out.docList;
      out.docList = superset.subset(cmd.getOffset(),cmd.getLen());
    } else {
      // sanity check our cursor assumptions
      assert null == superset : "cursor: superset isn't null";
      assert 0 == cmd.getOffset() : "cursor: command offset mismatch";
      assert 0 == out.docList.offset() : "cursor: docList offset mismatch";
      assert cmd.getLen() >= supersetMaxDoc : "cursor: superset len mismatch: " +
        cmd.getLen() + " vs " + supersetMaxDoc;
    }

SolrIndexSearcher.getDocListNC(qr, cmd) defines a number of Collector inner classes, but they are not relevant to this overview, so we go straight to the code below. Solr first builds a TopDocsCollector, which accumulates every hit that matches the query. If the request sets the timeAllowed switch, the query goes through the TimeLimitingCollector branch: TimeLimitingCollector wraps another Collector, and with timeAllowed set to, say, 200 ms, collection stops once that budget is exhausted and Solr returns whatever has been gathered so far, even if the result is incomplete (the response is then marked as partial). Notice that the search finally calls Lucene's IndexSearcher.search(); this is where we cross from Solr into Lucene. Afterwards Solr processes the total hit count and the priority queue held by the TopDocsCollector.

final TopDocsCollector topCollector = buildTopDocsCollector(len, cmd);
Collector collector = topCollector;
if (terminateEarly) {
  collector = new EarlyTerminatingCollector(collector, cmd.len);
}
if( timeAllowed > 0 ) {
  collector = new TimeLimitingCollector(collector, TimeLimitingCollector.getGlobalCounter(), timeAllowed);
}
if (pf.postFilter != null) {
  pf.postFilter.setLastDelegate(collector);
  collector = pf.postFilter;
}
try {
  super.search(query, luceneFilter, collector);
  if(collector instanceof DelegatingCollector) {
    ((DelegatingCollector)collector).finish();
  }
}
catch( TimeLimitingCollector.TimeExceededException x ) {
  log.warn( "Query: " + query + "; " + x.getMessage() );
  qr.setPartialResults(true);
}

totalHits = topCollector.getTotalHits();
TopDocs topDocs = topCollector.topDocs(0, len);
populateNextCursorMarkFromTopDocs(qr, cmd, topDocs);

maxScore = totalHits>0 ? topDocs.getMaxScore() : 0.0f;
nDocsReturned = topDocs.scoreDocs.length;
ids = new int[nDocsReturned];
scores = (cmd.getFlags()&GET_SCORES)!=0 ? new float[nDocsReturned] : null;
for (int i=0; i<nDocsReturned; i++) {
  ScoreDoc scoreDoc = topDocs.scoreDocs[i];
  ids[i] = scoreDoc.doc;
  if (scores != null) scores[i] = scoreDoc.score;
}
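
For completeness, the TimeLimitingCollector branch above is only taken when the client asks for it. A short client-side example follows; the parameter name timeAllowed is the standard Solr parameter, while the 200 ms value is arbitrary:

// Enabling the TimeLimitingCollector path from SolrJ (illustrative value):
SolrQuery query = new SolrQuery("code:辽*");
query.set("timeAllowed", 200);   // stop collecting after ~200 ms; the response is flagged partialResults=true if truncated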

Once inside Lucene's IndexSearcher.search(), the search iterates over every segment; each AtomicReaderContext carries that segment's information, including its docBase and its document count.

For each segment, Weight.bulkScorer() is called to reorganize the query clauses — for instance, several OR clauses are combined into one, and the AND clauses are gathered into a list — and to order them (roughly by the document frequency of their terms). Once the clauses are arranged, scorer.score(collector) walks the segment and collects the ids of all documents that satisfy the query (how those ids are actually retrieved is described in detail in later posts).

/**
 * Lower-level search API.
 *
 * <p>
 * {@link Collector#collect(int)} is called for every document. <br>
 *
 * <p>
 * NOTE: this method executes the searches on all given leaves exclusively.
 * To search across all the searchers leaves use {@link #leafContexts}.
 *
 * @param leaves
 *          the searchers leaves to execute the searches on
 * @param weight
 *          to match documents
 * @param collector
 *          to receive hits
 * @throws BooleanQuery.TooManyClauses If a query would exceed
 *         {@link BooleanQuery#getMaxClauseCount()} clauses.
 */
protected void search(List<AtomicReaderContext> leaves, Weight weight, Collector collector)
    throws IOException {

  // TODO: should we make this
  // threaded...?  the Collector could be sync'd?
  // always use single thread:
  for (AtomicReaderContext ctx : leaves) { // search each subreader
    try {
      collector.setNextReader(ctx);
    } catch (CollectionTerminatedException e) {
      // there is no doc of interest in this reader context
      // continue with the following leaf
      continue;
    }
    BulkScorer scorer = weight.bulkScorer(ctx, !collector.acceptsDocsOutOfOrder(), ctx.reader().getLiveDocs());
    if (scorer != null) {
      try {
        scorer.score(collector);
      } catch (CollectionTerminatedException e) {
        // collection was terminated prematurely
        // continue with the following leaf
      }
    }
  }
}
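
To illustrate how the per-segment docBase mentioned above turns segment-local ids into index-wide ids, here is a small stand-alone Collector written for this article. It is not part of Solr or Lucene; it only demonstrates the Lucene 4.x Collector contract that scorer.score(collector) drives.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.index.AtomicReaderContext;
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.Scorer;

// Minimal illustrative Collector: collect(int) receives a segment-local doc id,
// and adding the segment's docBase yields the global doc id.
public class DocIdCollector extends Collector {
  private final List<Integer> globalIds = new ArrayList<>();
  private int docBase;

  @Override
  public void setScorer(Scorer scorer) throws IOException {
    // scores are not needed for this sketch
  }

  @Override
  public void collect(int doc) throws IOException {
    globalIds.add(docBase + doc);   // doc is segment-local; docBase makes it global
  }

  @Override
  public void setNextReader(AtomicReaderContext context) throws IOException {
    docBase = context.docBase;      // updated once per segment (leaf)
  }

  @Override
  public boolean acceptsDocsOutOfOrder() {
    return true;
  }

  public List<Integer> getGlobalIds() { return globalIds; }
}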

At this point we have the ids of all documents matching the query, but the final response needs to show all the stored fields, which means Solr must later go back to the segments and, doc id by doc id, load the stored field data. Exactly where that happens is described in detail in a later post.
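
That field loading ultimately boils down to asking the searcher for the stored document of each returned id, roughly like the sketch below (a simplification written for this article; in Solr the lookup actually happens on the response-writer path triggered by writeResponse, with the documentCache consulted first):

// Sketch of the stored-field lookup that eventually happens for every returned doc id.
DocList docs = rb.getResults().docList;               // the ids produced by the search above
DocIterator iter = docs.iterator();
while (iter.hasNext()) {
  int docId = iter.nextDoc();
  Document storedDoc = searcher.doc(docId);            // reads the stored fields (.fdt/.fdx) for this doc
  System.out.println(storedDoc.get("id"));             // e.g. the stored "id" field
}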

To sum up: Solr's query path is fairly convoluted and leaves plenty of room for optimization. This post has only outlined the overall flow of a Solr query; the details of each step will be examined in the follow-up posts.
