An Investigation into Using Filters to Reduce Lucene's tf-idf Scoring Cost

The approach is to rewrite queries as filters. Lucene's built-in QueryWrapperFilter performs poorly, so in practice you end up writing the filters yourself: TermFilter, ExactPhraseFilter, ConjunctionFilter, and DisjunctionFilter.
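
The TermFilter itself is not included in this post; as a reference, here is a minimal sketch of what one might look like, reusing the same Lucene 4.0 calls as the ExactPhraseFilter below (a hedged sketch, not the benchmarked code). It hands a term's postings list straight out as a DocIdSet, so no Scorer is built and no tf-idf math runs:

import java.io.IOException;

import org.apache.lucene.index.AtomicReaderContext;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.Filter;
import org.apache.lucene.util.Bits;

// Hypothetical sketch: expose a single term's postings directly as a DocIdSet.
// No Scorer is created, so no tf-idf is computed.
public class TermFilter extends Filter {
    private final Term term;

    public TermFilter(Term term) {
        this.term = term;
    }

    @Override
    public DocIdSet getDocIdSet(AtomicReaderContext context, final Bits acceptDocs) throws IOException {
        Terms terms = context.reader().fields().terms(term.field());
        if (terms == null) {
            return null;  // field absent in this segment
        }
        final TermsEnum te = terms.iterator(null);
        if (!te.seekExact(term.bytes(), true)) {
            return null;  // term absent in this segment
        }
        return new DocIdSet() {
            @Override
            public DocIdSetIterator iterator() throws IOException {
                // a DocsEnum is itself a DocIdSetIterator over the matching docs
                return te.docs(acceptDocs, null, 0);
            }
        };
    }
}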

After several days of testing, OR shows the clearest gain: with 4 TermFilters and 4508 hits, the query ran about one third faster on my machine. ExactPhraseFilter also improved slightly (5%-10%).
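
The DisjunctionFilter behind the OR numbers is likewise not shown here. A plausible sketch of its iterator follows, merging the sub-iterators by docID with org.apache.lucene.util.PriorityQueue (an assumed implementation, not the benchmarked one); the point is that the union is computed without any per-document score accumulation:

import java.io.IOException;

import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.PriorityQueue;

// Hypothetical sketch of a DisjunctionFilter's iterator: a docID-ordered
// heap of sub-iterators, unioned without touching any scores.
public class DisjunctionIterator extends DocIdSetIterator {
    private final PriorityQueue<DocIdSetIterator> pq;
    private int doc = -1;

    public DisjunctionIterator(DocIdSetIterator[] subs) throws IOException {
        pq = new PriorityQueue<DocIdSetIterator>(subs.length) {
            @Override
            protected boolean lessThan(DocIdSetIterator a, DocIdSetIterator b) {
                return a.docID() < b.docID();
            }
        };
        for (DocIdSetIterator sub : subs) {
            sub.nextDoc();  // prime each sub-iterator
            pq.add(sub);
        }
    }

    @Override
    public int docID() {
        return doc;
    }

    @Override
    public int nextDoc() throws IOException {
        if (doc == NO_MORE_DOCS) {
            return doc;
        }
        // move every sub-iterator off the current doc, then take the smallest
        while (pq.top().docID() == doc) {
            pq.top().nextDoc();
            pq.updateTop();
        }
        return doc = pq.top().docID();
    }

    @Override
    public int advance(int target) throws IOException {
        while (pq.top().docID() < target) {
            pq.top().advance(target);
            pq.updateTop();
        }
        return doc = pq.top().docID();
    }
}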

The most puzzling case is AND. I expected the outcome to depend on the hit count and the number of sub-queries, but across repeated tests it was consistently slower.
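
For reference, the doc-level intersection a ConjunctionFilter iterator would perform per nextDoc() call is sketched below (hypothetical code against the same Lucene 4.0 API, not what was measured). It is the same leapfrog pattern as the doc-matching loop in ExactPhraseFilter's nextDoc() further down:

import java.io.IOException;

import org.apache.lucene.search.DocIdSetIterator;

// Hypothetical sketch of the leapfrog step a ConjunctionFilter's iterator
// would run; assumes `its` is sorted rarest-first, like postings[] below.
public class ConjunctionStep {
    static int nextMatch(DocIdSetIterator[] its) throws IOException {
        int doc = its[0].nextDoc();  // drive from the rarest iterator
        outer:
        while (doc != DocIdSetIterator.NO_MORE_DOCS) {
            for (int i = 1; i < its.length; ++i) {
                int other = its[i].docID();
                if (other < doc) {
                    other = its[i].advance(doc);  // leapfrog to the candidate
                }
                if (other > doc) {
                    if (other == DocIdSetIterator.NO_MORE_DOCS) {
                        return other;  // a sub-iterator is exhausted: no more matches
                    }
                    doc = its[0].advance(other);  // candidate rejected: catch the lead up
                    continue outer;
                }
            }
            return doc;  // every iterator agrees on doc
        }
        return DocIdSetIterator.NO_MORE_DOCS;
    }
}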

The ExactPhraseFilter and its unit test are attached below:

import java.io.IOException;
import java.util.ArrayList;

import org.apache.lucene.index.AtomicReaderContext;
import org.apache.lucene.index.DocsAndPositionsEnum;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.Filter;
import org.apache.lucene.util.ArrayUtil;
import org.apache.lucene.util.Bits;

// A simplified stand-in for Lucene's PhraseQuery: it only reports exact-phrase matches and computes no scores.
public class ExactPhraseFilter extends Filter {
    protected final ArrayList<Term> terms = new ArrayList<Term>();
    protected final ArrayList<Integer> positions = new ArrayList<Integer>();

    protected String fieldName;

    public void add(Term term) {
        if (terms.size() == 0) {
            fieldName = term.field();
        } else {
            assert fieldName.equals(term.field());
        }
        positions.add(Integer.valueOf(terms.size()));
        terms.add(term);
    }

    @Override
    public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException
    {
        return new ExactPhraseDocIdSet(context, acceptDocs);
    }

    static class PostingAndFreq implements Comparable<PostingAndFreq> {
        DocsAndPositionsEnum posEnum;
        int docFreq;          // document frequency; used to sort terms rarest-first
        int position;         // this term's offset within the phrase
        boolean useAdvance;   // for frequent terms, advance() beats repeated nextDoc()
        int posFreq = 0;      // number of positions in the current document
        int pos = -1;         // current position, normalized by the phrase offset
        int posTime = 0;      // how many positions consumed in the current document

        public PostingAndFreq(DocsAndPositionsEnum posEnum, int docFreq, int position, boolean useAdvance) {
            this.posEnum = posEnum;
            this.docFreq = docFreq;
            this.position = position;
            this.useAdvance = useAdvance;
        }

        @Override
        public int compareTo(PostingAndFreq other) {
            if (docFreq != other.docFreq) {
                return docFreq - other.docFreq;
            }
            if (position != other.position) {
                return position - other.position;
            }
            return 0;
        }
    }

    protected class ExactPhraseDocIdSet extends DocIdSet {
        protected final AtomicReaderContext context;
        protected final Bits acceptDocs;
        protected final PostingAndFreq[] postings;
        protected boolean noDocs = false;

        public ExactPhraseDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException {
            this.context = context;
            this.acceptDocs = acceptDocs;

            Terms fieldTerms = context.reader().fields().terms(fieldName);
            postings = new PostingAndFreq[terms.size()];

            TermsEnum te = fieldTerms.iterator(null);
            for (int i = 0; i < terms.size(); ++i) {
                final Term t = terms.get(i);
                if (!te.seekExact(t.bytes(), true)) {
                    noDocs = true;
                    return;
                }
                if (i == 0) {
                    postings[i] = new PostingAndFreq(te.docsAndPositions(acceptDocs, null, 0), te.docFreq(), positions.get(i), false);
                } else {
                    // prefer advance() for terms much more frequent than the first term
                    postings[i] = new PostingAndFreq(te.docsAndPositions(acceptDocs, null, 0), te.docFreq(), positions.get(i), te.docFreq() > 5 * postings[0].docFreq);
                }
            }

            // sort rarest-first; nextDoc() below drives from postings[0]
            ArrayUtil.mergeSort(postings);
            for (int i = 1; i < terms.size(); ++i) {
                postings[i].posEnum.nextDoc();
            }
        }

        @Override
        public DocIdSetIterator iterator() throws IOException
        {
            if (noDocs) {
                return EMPTY_DOCIDSET.iterator();
            } else {
                return new ExactPhraseDocIdSetIterator(context, acceptDocs);
            }
        }

        protected class ExactPhraseDocIdSetIterator extends DocIdSetIterator {
            protected int docID = -1;

            public ExactPhraseDocIdSetIterator(AtomicReaderContext context, Bits acceptDocs) throws IOException {
                // all state (postings, noDocs) lives in the enclosing ExactPhraseDocIdSet
            }

            @Override
            public int nextDoc() throws IOException {
                while (true) {
                    // first (rarest) term
                    final int doc = postings[0].posEnum.nextDoc();
                    if (doc == DocIdSetIterator.NO_MORE_DOCS) {
                        return docID = doc;
                    }

                    // non-first terms
                    int i = 1;
                    while (i < postings.length) {
                        final PostingAndFreq pf = postings[i];
                        int doc2 = pf.posEnum.docID();
                        if (pf.useAdvance) {
                            if (doc2 < doc) {
                                doc2 = pf.posEnum.advance(doc);
                            }
                        } else {
                            // step with nextDoc() first; fall back to advance() if the
                            // gap turns out to be large (50 steps without catching up)
                            int iter = 0;
                            while (doc2 < doc) {
                                if (++iter == 50) {
                                    doc2 = pf.posEnum.advance(doc);
                                } else {
                                    doc2 = pf.posEnum.nextDoc();
                                }
                            }
                        }
                        if (doc2 > doc) {
                            break;
                        }
                        ++i;
                    }

                    if (i == postings.length) {
                        // all terms are on the same document; now verify positions
                        docID = doc;
                        if (containsPhrase()) {
                            return docID;
                        }
                    }
                }
            }

            @Override
            public int advance(int target) throws IOException {
                // not needed: this iterator is only ever consumed via nextDoc()
                throw new UnsupportedOperationException();
            }

            private boolean containsPhrase() throws IOException {
                int index = -1;  // candidate phrase-start position all terms must match
                int i = 0;
                PostingAndFreq pf;

                // init: read each term's first position, normalized by its offset in
                // the phrase so that equal values mean "same phrase start"
                for (i = 0; i < postings.length; ++i) {
                    postings[i].posFreq = postings[i].posEnum.freq();
                    postings[i].pos = postings[i].posEnum.nextPosition() - postings[i].position;
                    postings[i].posTime = 1;
                }

                while (true) {
                    pf = postings[0];

                    // first (rarest) term: advance it up to the candidate
                    while (pf.pos < index && pf.posTime < pf.posFreq) {
                        pf.pos = pf.posEnum.nextPosition() - pf.position;
                        ++pf.posTime;
                    }
                    if (pf.pos >= index) {
                        index = pf.pos;   // new (or confirmed) candidate
                    } else if (pf.posTime == pf.posFreq) {
                        return false;     // positions exhausted, no phrase here
                    }

                    // remaining terms: each must land exactly on the candidate
                    for (i = 1; i < postings.length; ++i) {
                        pf = postings[i];
                        while (pf.pos < index && pf.posTime < pf.posFreq) {
                            pf.pos = pf.posEnum.nextPosition() - pf.position;
                            ++pf.posTime;
                        }
                        if (pf.pos > index) {
                            index = pf.pos;  // overshot: restart with a new candidate
                            break;
                        }
                        if (pf.pos == index) {
                            continue;        // agrees; check the next term
                        }
                        if (pf.posTime == pf.posFreq) {
                            return false;    // positions exhausted, no phrase here
                        }
                    }
                    if (i == postings.length) {
                        return true;         // every term matched the candidate
                    }
                }
            }

            @Override
            public int docID()
            {
                return docID;
            }
        }

    }

}

Unit test (TestNG):

import java.io.IOException;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

import org.apache.lucene.codecs.Codec;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.TextField;
import org.apache.lucene.document.Field.Store;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.IndexWriterConfig.OpenMode;
import org.apache.lucene.search.ConstantScoreQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;
import org.testng.Assert;
import org.testng.annotations.AfterTest;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Test;

import com.dp.arts.lucenex.codec.Dp10Codec;

public class ExactPhraseFilterTest
{
    final Directory dir = new RAMDirectory();

    @BeforeTest
    public void setUp() throws IOException {
        Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_40);
        IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_40, analyzer);
        iwc.setOpenMode(OpenMode.CREATE);
        iwc.setCodec(Codec.forName(Dp10Codec.DP10_CODEC_NAME));

        IndexWriter writer = new IndexWriter(dir, iwc);
        addDocument(writer, "新疆烧烤");  // 0
        addDocument(writer, "啤酒");  // 1
        addDocument(writer, "烤烧");  // 2
        addDocument(writer, "烧烧烧");  // 3
        addDocument(writer, "烤烧中华烧烤"); // 4
        writer.close();
    }

    private void addDocument(IndexWriter writer, String str) throws IOException {
        Document doc = new Document();
        doc.add(new TextField("searchkeywords", str, Store.YES));
        writer.addDocument(doc);  // analyzed with the writer's configured StandardAnalyzer
    }

    @AfterTest
    public void tearDown() throws IOException
    {
        this.dir.close();
    }

    @Test
    public void test1() throws IOException
    {
        IndexReader reader = DirectoryReader.open(dir);
        IndexSearcher searcher = new IndexSearcher(reader);

        ExactPhraseFilter pf = new ExactPhraseFilter();
        pf.add(new Term("searchkeywords", "烧"));
        pf.add(new Term("searchkeywords", "烤"));
        Query query = new ConstantScoreQuery(pf);
        TopDocs results = searcher.search(query, 20);

        Assert.assertEquals(results.totalHits, 2);
        Assert.assertEquals(results.scoreDocs[0].doc, 0);  // "新疆烧烤"
        Assert.assertEquals(results.scoreDocs[1].doc, 4);  // "烤烧中华烧烤"

        searcher.getIndexReader().close();
    }
}
