Awesome Random Forest

Random Forest - a curated list of resources on tree-based methods, including but not limited to random forests, bagging, and boosting.
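
To make the scope concrete, here is a minimal sketch (not part of the original list) that contrasts the three ensemble families the list covers (bagging, random forests, and boosting) using scikit-learn; the dataset and hyperparameters are placeholder assumptions, not recommendations from any of the papers below.

```python
# Minimal, illustrative comparison of bagging vs. random forest vs. boosting.
# Assumes scikit-learn is installed; dataset and settings are arbitrary.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (
    BaggingClassifier,
    GradientBoostingClassifier,
    RandomForestClassifier,
)
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

models = {
    # Bagging: independent trees on bootstrap samples, predictions aggregated.
    "bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0),
    # Random forest: bagging plus a random feature subset at each split.
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    # Boosting: trees fit sequentially to the errors of the current ensemble.
    "boosting": GradientBoostingClassifier(n_estimators=100, random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```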

Contributing

Please feel free to submit pull requests, email Jung Kwon Lee ([email protected]), or join our chats to add links.

Table of Contents

Codes

Theory

Lectures

Books

Papers

  • Global Refinement of Random Forest [Paper]

    • Shaoqing Ren, Xudong Cao, Yichen Wei, Jian Sun, Global Refinement of Random Forest, CVPR 2015
  • Feature-Budgeted Random Forest [Paper] [Supp]
    • Feng Nan, Joseph Wang, Venkatesh Saligrama, Feature-Budgeted Random Forest, ICML 2015
  • Bayesian Forests [Paper]
    • Matthew Taddy, Chun-Sheng Chen, Jun Yu, Mitch Wyle, Bayesian and Empirical Bayesian Forests, ICML 2015
  • Mondrian Forests: Efficient Online Random Forests [Paper] [Code] [Slides]
    • Balaji Lakshminarayanan, Daniel M. Roy and Yee Whye Teh, Mondrian Forests: Efficient Online Random Forests, NIPS 2014
  • Consistency of random forests [Paper]
    • Scornet, E., Biau, G. and Vert, J.-P. (2015). Consistency of random forests, The Annals of Statistics, in press.
  • On the asymptotics of random forests [Paper]
    • Scornet, E. (2015). On the asymptotics of random forests, Journal of Multivariate Analysis, in press.
  • Random Forests In Theory and In Practice [Paper]
    • Misha Denil, David Matheson, Nando de Freitas, Narrowing the Gap: Random Forests In Theory and In Practice, ICML 2014
  • Decision Jungles [Paper]
    • Jamie Shotton, Toby Sharp, Pushmeet Kohli, Sebastian Nowozin, John Winn, and Antonio Criminisi, Decision Jungles: Compact and Rich Models for Classification, NIPS 2013
  • Semi-supervised Node Splitting for Random Forest Construction [Paper]
    • Xiao Liu, Mingli Song, Dacheng Tao, Zicheng Liu, Luming Zhang, Chun Chen and Jiajun Bu, Semi-supervised Node Splitting for Random Forest Construction, CVPR 2013
  • Improved Information Gain Estimates for Decision Tree Induction [Paper]
    • Sebastian Nowozin, Improved Information Gain Estimates for Decision Tree Induction, ICML 2012
  • MIForests: Multiple-Instance Learning with Randomized Trees [Paper] [Code]
    • Christian Leistner, Amir Saffari, and Horst Bischof, MIForests: Multiple-Instance Learning with Randomized Trees, ECCV 2010

Thesis

  • Understanding Random Forests

    • PhD dissertation, Gilles Louppe, July 2014. Defended on October 9, 2014.
    • [Repository] with thesis and related codes

Applications

Image Classification

  • ETH Zurich [Paper-CVPR15] [Paper-CVPR14] [Paper-ECCV]

    • Marko Ristin, Juergen Gall, Matthieu Guillaumin, and Luc Van Gool, From Categories to Subcategories: Large-scale Image Classification with Partial Class Label Refinement, CVPR 2015
    • Marko Ristin, Matthieu Guillaumin, Juergen Gall, and Luc Van Gool, Incremental Learning of NCM Forests for Large-Scale Image Classification, CVPR 2014
    • Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool, Food-101 – Mining Discriminative Components with Random Forests, ECCV 2014
  • University of Girona & University of Oxford [Paper]
    • Anna Bosch, Andrew Zisserman, and Xavier Munoz, Image Classification using Random Forests and Ferns, ICCV 2007

Object Detection

  • Graz University of Technology [Paper-CVPR] [Paper-ICCV]

    • Samuel Schulter, Christian Leistner, Paul Wohlhart, Peter M. Roth, and Horst Bischof, Accurate Object Detection with Joint Classification-Regression Random Forests, CVPR 2014
    • Samuel Schulter, Christian Leistner, Paul Wohlhart, Peter M. Roth, and Horst Bischof, Alternating Regression Forests for Object Detection and Pose Estimation, ICCV 2013
  • ETH Zurich + Microsoft Research Cambridge [Paper]
    • Juergen Gall, and Victor Lempitsky, Class-Specific Hough Forests for Object Detection, CVPR 2009

Object Tracking

  • Technische Universitat Munchen [Paper]

    • David Joseph Tan, and Slobodan Ilic, Multi-Forest Tracker: A Chameleon in Tracking, CVPR 2014
  • ETH Zurich + Leibniz University Hannover + Stanford University [Paper]
    • Laura Leal-Taixe, Michele Fenzi, Alina Kuznetsova, Bodo Rosenhahn, and Silvio Savarese, Learning an image-based motion context for multiple people tracking, CVPR 2014
  • Graz University of Technology [Paper]
    • Martin Godec, Peter M. Roth, and Horst Bischof, Hough-based Tracking of Non-Rigid Objects, ICCV 2011

Edge Detection

  • University of California, Irvine [Paper] [Code]

    • Sam Hallman, and Charless C. Fowlkes, Oriented Edge Forests for Boundary Detection, CVPR 2015
  • Microsoft Research [Paper] [Code]
    • Piotr Dollar, and C. Lawrence Zitnick, Structured Forests for Fast Edge Detection, ICCV 2013
  • Massachusetts Inst. of Technology + Microsoft Research [Paper] [Code]
    • Joseph J. Lim, C. Lawrence Zitnick, and Piotr Dollar, Sketch Tokens: A Learned Mid-level Representation for Contour and Object Detection, CVPR 2013

Semantic Segmentation

  • Fondazione Bruno Kessler + Microsoft Research Cambridge [Paper]

    • Samuel Rota Bulo, and Peter Kontschieder, Neural Decision Forests for Semantic Image Labelling, CVPR 2014
  • INRIA + Microsoft Research Cambridge [Paper]
    • Herve Lombaert, Darko Zikic, Antonio Criminisi, and Nicholas Ayache, Laplacian Forests: Semantic Image Segmentation by Guided Bagging, MICCAI 2014
  • Microsoft Research Cambridge + GE Global Research Center + University of California + Rutgers University [Paper]
    • Albert Montillo, Jamie Shotton, John Winn, Juan Eugenio Iglesias, Dimitri Metaxas, and Antonio Criminisi, Entangled Decision Forests and their Application for Semantic Segmentation of CT Images, IPMI 2011
  • University of Cambridge + Toshiba Corporate R&D Center [Paper]
    • Jamie Shotton, Matthew Johnson, and Roberto Cipolla, Semantic Texton Forests for Image Categorization and Segmentation, CVPR 2008

Human / Hand Pose Estimation

  • Microsoft Research Cambridge [Paper-CHI] [Video-CHI] [Paper-CVPR]

    • Toby Sharp, Cem Keskin, Duncan Robertson, Jonathan Taylor, Jamie Shotton, David Kim, Christoph Rhemann, Ido Leichter, Alon Vinnikov, Yichen Wei, Daniel Freedman, Pushmeet Kohli, Eyal Krupka, Andrew Fitzgibbon, and Shahram Izadi, Accurate, Robust, and Flexible Real-time Hand Tracking, CHI 2015
    • Jonathan Taylor, Jamie Shotton, Toby Sharp, and Andrew Fitzgibbon, The Vitruvian Manifold: Inferring Dense Correspondences for One-Shot Human Pose Estimation, CVPR 2012
  • Microsoft Research Haifa [Paper]
    • Eyal Krupka, Alon Vinnikov, Ben Klein, Aharon Bar Hillel, and Daniel Freedman, Discriminative Ferns Ensemble for Hand Pose Recognition, CVPR 2014
  • Microsoft Research Asia [Paper]
    • Shaoqing Ren, Xudong Cao, Yichen Wei, and Jian Sun, Face Alignment at 3000 FPS via Regressing Local Binary Features, CVPR 2014
  • Imperial College London [Paper-CVPR-Face] [Paper-CVPR-Hand] [Paper-ICCV]
    • Xiaowei Zhao, Tae-Kyun Kim, and Wenhan Luo, Unified Face Analysis by Iterative Multi-Output Random Forests, CVPR 2014
    • Danhang Tang, Hyung Jin Chang, Alykhan Tejani, and Tae-Kyun Kim, Latent Regression Forest: Structured Estimation of 3D Articulated Hand Posture, CVPR 2014
    • Danhang Tang, Tsz-Ho Yu, and Tae-Kyun Kim, Real-time Articulated Hand Pose Estimation using Semi-supervised Transductive Regression Forests, ICCV 2013
  • ETH Zurich + Microsoft [Paper]
    • Matthias Dantone, Juergen Gall, Christian Leistner, and Luc Van Gool, Human Pose Estimation using Body Parts Dependent Joint Regressors, CVPR 2013

3D Localization

  • Imperial College London [Paper]

    • Alykhan Tejani, Danhang Tang, Rigas Kouskouridas, and Tae-Kyun Kim, Latent-Class Hough Forests for 3D Object Detection and Pose Estimation, ECCV 2014
  • Microsoft Research Cambridge + University of Illinois + Imperial College London [Paper]
    • Abner Guzman-Rivera, Pushmeet Kohli, Ben Glocker, Jamie Shotton, Toby Sharp, Andrew Fitzgibbon, and Shahram Izadi, Multi-Output Learning for Camera Relocalization, CVPR 2014
  • Microsoft Research Cambridge [Paper]
    • Jamie Shotton, Ben Glocker, Christopher Zach, Shahram Izadi, Antonio Criminisi, and Andrew Fitzgibbon, Scene Coordinate Regression Forests for Camera Relocalization in RGB-D Images, CVPR 2013

Low-Level Vision

  • Super-Resolution

    • Graz University of Technology [Paper]

      • Samuel Schulter, Christian Leistner, and Horst Bischof, Fast and Accurate Image Upscaling with Super-Resolution Forests, CVPR 2015
  • Denoising
    • Microsoft Research + iCub Facility - Istituto Italiano di Tecnologia [Paper]

      • Sean Ryan Fanello, Cem Keskin, Pushmeet Kohli, Shahram Izadi, Jamie Shotton, Antonio Criminisi, Ugo Pattacini, and Tim Paek, Filter Forests for Learning Data-Dependent Convolutional Kernels, CVPR 2014

Maintainers - Jiwon Kim, Janghoon Choi, Jung Kwon Lee
