FP-Tree and a Python Implementation

The FP-growth algorithm discovers frequent itemsets efficiently, but it does not, by itself, generate association rules. FP-growth only needs to scan the database twice, and in typical cases it is roughly two orders of magnitude faster than the Apriori algorithm.

An example FP-tree is shown in Figure 1 below:

It looks much like any other tree, except that it also contains links connecting similar nodes (nodes that represent the same item) across branches.

The FP-tree node is defined as follows (the code in this post is written for Python 2):

class treeNode:
    def __init__(self, nameValue, numOccur, parentNode):
        self.name = nameValue        # item name
        self.count = numOccur        # count of occurrences along this path
        self.nodeLink = None         # link to the next node holding the same item
        self.parent = parentNode     # parent node, needed to ascend the tree later
        self.children = {}           # child nodes, keyed by item name

    def inc(self, numOccur):
        # increase the count by numOccur
        self.count += numOccur

    def disp(self, ind=1):
        # display the tree as indented text
        print ' '*ind, self.name, '  ', self.count
        for child in self.children.values():
            child.disp(ind + 1)
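
As a quick sanity check of the node class, you can build a couple of nodes by hand and print them with disp(); the item names and counts below are made up purely for illustration:

rootNode = treeNode('pyramid', 9, None)
rootNode.children['eye'] = treeNode('eye', 13, rootNode)
rootNode.children['phoenix'] = treeNode('phoenix', 3, rootNode)
rootNode.disp()

Each node is printed on its own line, indented by its depth in the tree and followed by its count.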

The disp() method simply displays the tree structure as text. In addition, the implementation needs a header table that points to the first instance of each item type, as shown in Figure 2:

The core of the algorithm is building the FP-tree. The tree-construction code is:

def createTree(dataSet, minSup=1): #create FP-tree from dataset but don't mine
    headerTable = {}
    #go over dataSet twice
    for trans in dataSet:  # first pass counts frequency of occurrence
        for item in trans:
            headerTable[item] = headerTable.get(item, 0) + dataSet[trans]
    for k in headerTable.keys():  #remove items not meeting minSup
        if headerTable[k] < minSup:
            del(headerTable[k])
    freqItemSet = set(headerTable.keys())
    #print 'freqItemSet: ',freqItemSet
    if len(freqItemSet) == 0: return None, None  #if no items meet min support -->get out
    for k in headerTable:
        headerTable[k] = [headerTable[k], None] #reformat headerTable to use Node link
    #print 'headerTable: ',headerTable
    retTree = treeNode('Null Set', 1, None) #create tree
    for tranSet, count in dataSet.items():  #go through dataset 2nd time
        localD = {}
        for item in tranSet:  #put transaction items in order
            if item in freqItemSet:
                localD[item] = headerTable[item][0]
        if len(localD) > 0:
            orderedItems = [v[0] for v in sorted(localD.items(), key=lambda p: p[1], reverse=True)]
            updateTree(orderedItems, retTree, headerTable, count)#populate tree with ordered freq itemset
    return retTree, headerTable #return tree and header table

def updateTree(items, inTree, headerTable, count):
    if items[0] in inTree.children:#check if orderedItems[0] in retTree.children
        inTree.children[items[0]].inc(count)  # increment count
    else:   #add items[0] to inTree.children
        inTree.children[items[0]] = treeNode(items[0], count, inTree)
        if headerTable[items[0]][1] == None: #update header table
            headerTable[items[0]][1] = inTree.children[items[0]]
        else:
            updateHeader(headerTable[items[0]][1], inTree.children[items[0]])
    if len(items) > 1:#call updateTree() with remaining ordered items
        updateTree(items[1::], inTree.children[items[0]], headerTable, count)

def updateHeader(nodeToTest, targetNode):   #this version does not use recursion
    while (nodeToTest.nodeLink != None):    #Do not use recursion to traverse a linked list!
        nodeToTest = nodeToTest.nodeLink
    nodeToTest.nodeLink = targetNode

With Figure 2 as a reference, the tree-building process is fairly easy to follow: headerTable is the header table, and it is maintained here so that it can be used later when mining frequent itemsets. A few implementation details are worth noting: all items are counted first, and any item that fails to meet the minimum support is deleted right away and never added to the FP-tree. updateTree() grows the tree with each filtered, ordered transaction, and updateHeader() maintains the node links stored in headerTable.
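
Note that createTree() expects its dataSet argument as a dict that maps each transaction (a frozenset of items) to the number of times that transaction occurs. A minimal usage sketch; the transactions and variable names here are made up for illustration:

miniData = {frozenset(['z', 'x', 'y']): 2,
            frozenset(['z', 'r']): 1,
            frozenset(['z']): 1}
miniTree, miniHeader = createTree(miniData, minSup=2)
miniTree.disp()

With minSup=2 the item 'r' occurs only once, so it is filtered out before the tree is built.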

Mining frequent itemsets from the FP-tree: the complete script below reuses the treeNode class and the createTree(), updateTree(), and updateHeader() functions defined above, and adds the functions that perform the mining.

def loadSimpDat():
    # a small toy dataset: six transactions, each a list of items
    simpDat = [['r', 'z', 'h', 'j', 'p'],
               ['z', 'y', 'x', 'w', 'v', 'u', 't', 's'],
               ['z'],
               ['r', 'x', 'n', 'o', 's'],
               ['y', 'r', 'x', 'z', 'q', 't', 'p'],
               ['y', 'z', 'x', 'e', 'q', 's', 't', 'm']]
    return simpDat

def createInitSet(dataSet):
    # convert the list of transactions into the dict format expected by
    # createTree(): {frozenset(transaction): count}
    retDict = {}
    for trans in dataSet:
        retDict[frozenset(trans)] = 1
    return retDict

def ascendTree(leafNode, prefixPath):
    # climb from leafNode up to the root, collecting item names along the way
    if leafNode.parent != None:
        prefixPath.append(leafNode.name)
        ascendTree(leafNode.parent, prefixPath)

def findPrefixPath(basePat, treeNode):
    # collect the conditional pattern base for basePat: every prefix path
    # ending at a basePat node, found by following the node links
    condPats = {}
    while treeNode != None:
        prefixPath = []
        ascendTree(treeNode, prefixPath)
        if len(prefixPath) > 1:
            condPats[frozenset(prefixPath[1:])] = treeNode.count
        treeNode = treeNode.nodeLink
    return condPats

def mineTree(inTree, headerTable, minSup, preFix, freqItemList):
    # mine the FP-tree recursively, starting from the least frequent items
    bigL = [v[0] for v in sorted(headerTable.items(), key=lambda p: p[1])]
    for basePat in bigL:
        newFreqSet = preFix.copy()
        newFreqSet.add(basePat)
        freqItemList.append(newFreqSet)   # the prefix plus this item is frequent
        condPattBases = findPrefixPath(basePat, headerTable[basePat][1])
        myCondTree, myHead = createTree(condPattBases, minSup)   # conditional FP-tree
        if myHead != None:   # keep mining as long as the conditional tree is non-empty
            print 'conditional tree for: ', newFreqSet
            myCondTree.disp(1)
            mineTree(myCondTree, myHead, minSup, newFreqSet, freqItemList)

if __name__ == "__main__":
    simpDat = loadSimpDat()
    print simpDat
    initSet = createInitSet(simpDat)
    print initSet
    myFPtree, myHeaderTab = createTree(initSet, 3)    # build the FP-tree with minimum support 3
    myFPtree.disp()
    print findPrefixPath('t', myHeaderTab['t'][1])    # conditional pattern base for item 't'
    freqItems = []
    mineTree(myFPtree, myHeaderTab, 3, set([]), freqItems)
    print freqItems
    

Output:

[['r', 'z', 'h', 'j', 'p'], ['z', 'y', 'x', 'w', 'v', 'u', 't', 's'], ['z'], ['r', 'x', 'n', 'o', 's'], ['y', 'r', 'x', 'z', 'q', 't', 'p'], ['y', 'z', 'x', 'e', 'q', 's', 't', 'm']]
{frozenset(['e', 'm', 'q', 's', 't', 'y', 'x', 'z']): 1, frozenset(['x', 's', 'r', 'o', 'n']): 1, frozenset(['s', 'u', 't', 'w', 'v', 'y', 'x', 'z']): 1, frozenset(['q', 'p', 'r', 't', 'y', 'x', 'z']): 1, frozenset(['h', 'r', 'z', 'p', 'j']): 1, frozenset(['z']): 1}
  Null Set    1
   x    1
    s    1
     r    1
   z    5
    x    3
     y    3
      s    2
       t    2
      r    1
       t    1
    r    1
{frozenset(['y', 'x', 's', 'z']): 2, frozenset(['y', 'x', 'r', 'z']): 1}
conditional tree for:  set(['y'])
  Null Set    1
   x    3
    z    3
conditional tree for:  set(['y', 'z'])
  Null Set    1
   x    3
conditional tree for:  set(['s'])
  Null Set    1
   x    3
conditional tree for:  set(['t'])
  Null Set    1
   y    3
    x    3
     z    3
conditional tree for:  set(['x', 't'])
  Null Set    1
   y    3
conditional tree for:  set(['z', 't'])
  Null Set    1
   y    3
    x    3
conditional tree for:  set(['x', 'z', 't'])
  Null Set    1
   y    3
conditional tree for:  set(['x'])
  Null Set    1
   z    3
[set(['y']), set(['y', 'x']), set(['y', 'z']), set(['y', 'x', 'z']), set(['s']), set(['x', 's']), set(['t']), set(['y', 't']), set(['x', 't']), set(['y', 'x', 't']), set(['z', 't']), set(['y', 'z', 't']), set(['x', 'z', 't']), set(['y', 'x', 'z', 't']), set(['r']), set(['x']), set(['x', 'z']), set(['z'])]
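
The listing and the output above are written for and produced by Python 2. A sketch of the adaptations needed to run the same code under Python 3 (not part of the original listing):

# print is a function in Python 3, e.g. in disp() and mineTree():
print(' '*ind, self.name, '  ', self.count)

# copy the keys before deleting entries while iterating in createTree():
for k in list(headerTable.keys()):
    if headerTable[k] < minSup:
        del(headerTable[k])

# in mineTree(), sort the header table by count only; comparing the
# [count, treeNode] lists raises a TypeError in Python 3 when two counts tie:
bigL = [v[0] for v in sorted(headerTable.items(), key=lambda p: p[1][0])]

The printed sets and frozensets will also look slightly different, since Python 3 uses a different textual form for their repr.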