Repost: How to Choose a Machine Learning Algorithm

Choosing a Machine Learning Classifier

How do you know what machine learning algorithm to choose for your classification problem? Of course, if you really care about accuracy, your best bet is to test out a couple different ones (making sure to try different parameters within each algorithm as well), and select the best one by cross-validation. But if you’re simply looking for a “good enough” algorithm for your problem, or a place to start, here are some general guidelines I’ve found to work well over the years.
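If you do go the cross-validation route, the comparison takes only a few lines. Here is a minimal sketch using scikit-learn; the dataset and the candidate models are illustrative stand-ins, not a fixed recipe.

```python
# Try several classifiers and pick the best by cross-validation.
# The dataset and model settings below are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "naive_bayes": GaussianNB(),
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "svm_rbf": SVC(kernel="rbf"),
}

for name, clf in candidates.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

In practice you'd also grid-search the parameters within each candidate, but even this bare comparison is more informative than picking an algorithm on reputation alone.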

How large is your training set?

If your training set is small, high bias/low variance classifiers (e.g., Naive Bayes) have an advantage over low bias/high variance classifiers (e.g., kNN), since the latter will overfit. But low bias/high variance classifiers start to win out as your training set grows (they have lower asymptotic error), since high bias classifiers aren’t powerful enough to provide accurate models.
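One way to see this crossover on your own data is to print a learning curve for a high-bias and a high-variance classifier side by side. A rough sketch, with the dataset and training sizes chosen purely for illustration:

```python
# Compare Naive Bayes (high bias) vs. kNN (high variance) as the
# training set grows. Dataset and size grid are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import learning_curve
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=10, random_state=0)

for name, clf in [("naive_bayes", GaussianNB()),
                  ("knn", KNeighborsClassifier(n_neighbors=5))]:
    sizes, _, val_scores = learning_curve(
        clf, X, y, train_sizes=np.linspace(0.05, 1.0, 5), cv=5)
    for n, s in zip(sizes, val_scores.mean(axis=1)):
        print(f"{name}  n={n:5d}  cv_accuracy={s:.3f}")
```

Typically the high-bias model leads at small n and plateaus early, while the high-variance model catches up and passes it as n grows.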

You can also think of this as a generative model vs. discriminative model distinction: generative classifiers like Naive Bayes model how each class generates its features, which gives them a head start with little data, while discriminative classifiers like logistic regression model the class boundary directly, which needs more data but fits it more closely.

Advantages of some particular algorithms

Advantages of Naive Bayes: Super simple, you’re just doing a bunch of counts. If the NB conditional independence assumption actually holds, a Naive Bayes classifier will converge quicker than discriminative models like logistic regression, so you need less training data. And even if the NB assumption doesn’t hold, a NB classifier still often does a great job in practice. A good bet if you want something fast and easy that performs pretty well. Its main disadvantage is that it can’t learn interactions between features (e.g., it can’t learn that although you love movies with Brad Pitt and Tom Cruise, you hate movies where they’re together).
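The "just a bunch of counts" point is easy to see for text: fitting a multinomial Naive Bayes model amounts to tallying per-class word frequencies. A minimal sketch, with a made-up toy corpus:

```python
# Naive Bayes text classification: fitting is counting word/class
# frequencies. The tiny corpus below is invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ["great movie loved it", "terrible movie hated it",
        "loved the acting", "hated the plot"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

vec = CountVectorizer()
X = vec.fit_transform(docs)           # per-document word counts
clf = MultinomialNB().fit(X, labels)  # fit = count smoothed word frequencies per class

print(clf.predict(vec.transform(["loved the movie"])))  # -> [1]
```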

Advantages of Logistic Regression: Lots of ways to regularize your model, and you don’t have to worry as much about your features being correlated, like you do in Naive Bayes. You also have a nice probabilistic interpretation, unlike decision trees or SVMs, and you can easily update your model to take in new data (using an online gradient descent method), again unlike decision trees or SVMs. Use it if you want a probabilistic framework (e.g., to easily adjust classification thresholds, to say when you’re unsure, or to get confidence intervals) or if you expect to receive more training data in the future that you want to be able to quickly incorporate into your model.
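Both perks — probabilistic outputs and online updates — show up directly in code. A hedged sketch: scikit-learn's SGDClassifier with log loss is one way to get logistic regression trained by online gradient descent; the data, batch scheme, and thresholds here are illustrative.

```python
# Logistic regression via online gradient descent, with probability
# thresholds for "unsure" predictions. Data and thresholds are
# illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=2000, random_state=0)
clf = SGDClassifier(loss="log_loss", random_state=0)

# Online learning: feed the data in batches, as it would arrive in production.
for batch in np.array_split(np.arange(len(y)), 10):
    clf.partial_fit(X[batch], y[batch], classes=[0, 1])

# Probabilistic interpretation: only commit to a label when confident.
proba = clf.predict_proba(X[:5])[:, 1]
preds = np.where(proba > 0.9, 1, np.where(proba < 0.1, 0, -1))  # -1 = "unsure"
print(proba.round(3), preds)
```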

Advantages of Decision Trees: Easy to interpret and explain (for some people – I’m not sure I fall into this camp). They easily handle feature interactions and they’re non-parametric, so you don’t have to worry about outliers or whether the data is linearly separable (e.g., decision trees easily take care of cases where you have class A at the low end of some feature x, class B in the mid-range of feature x, and A again at the high end). One disadvantage is that they don’t support online learning, so you have to rebuild your tree when new examples come in. Another disadvantage is that they easily overfit, but that’s where ensemble methods like random forests (or boosted trees) come in. Plus, random forests are often the winner for lots of problems in classification (usually slightly ahead of SVMs, I believe), they’re fast and scalable, and you don’t have to worry about tuning a bunch of parameters like you do with SVMs, so they seem to be quite popular these days.
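A quick sketch of the overfitting-and-ensembles point: on a nonlinear problem, a single unpruned tree overfits while a mostly-default random forest does well. The dataset and settings are illustrative, not a benchmark.

```python
# Single decision tree (overfits) vs. random forest (averages it away),
# both with near-default settings. Dataset is an illustrative assumption.
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=1000, noise=0.3, random_state=0)

tree = DecisionTreeClassifier(random_state=0)                 # unpruned, prone to overfitting
forest = RandomForestClassifier(n_estimators=200, random_state=0)

print("single tree  :", cross_val_score(tree, X, y, cv=5).mean().round(3))
print("random forest:", cross_val_score(forest, X, y, cv=5).mean().round(3))
```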

Advantages of SVMs: High accuracy, nice theoretical guarantees regarding overfitting, and with an appropriate kernel they can work well even if your data isn’t linearly separable in the base feature space. Especially popular in text classification problems where very high-dimensional spaces are the norm. Memory-intensive, hard to interpret, and kind of annoying to run and tune, though, so I think random forests are starting to steal the crown.
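The kernel point is easy to demonstrate: on concentric circles, which no line can separate, a linear SVM fails while an RBF-kernel SVM does fine. A small sketch, with illustrative data and parameters:

```python
# Linear vs. RBF-kernel SVM on data that isn't linearly separable in
# the base feature space. Dataset parameters are illustrative assumptions.
from sklearn.datasets import make_circles
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_circles(n_samples=500, noise=0.1, factor=0.4, random_state=0)

for kernel in ("linear", "rbf"):
    score = cross_val_score(SVC(kernel=kernel), X, y, cv=5).mean()
    print(f"{kernel}: {score:.3f}")  # the RBF kernel should do far better here
```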

But…

Recall, though, that better data often beats better algorithms, and designing good features goes a long way. And if you have a huge dataset, then whichever classification algorithm you use might not matter so much in terms of classification performance (so choose your algorithm based on speed or ease of use instead).

And to reiterate what I said above, if you really care about accuracy, you should definitely try a bunch of different classifiers and select the best one by cross-validation. Or, to take a lesson from the Netflix Prize (and Middle Earth), just use an ensemble method to choose them all.
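For the "use them all" option, a soft-voting ensemble over the classifiers discussed above is one simple starting point. A hedged sketch; the component models and settings are illustrative, not a tuned recipe:

```python
# Soft-voting ensemble: average the predicted probabilities of several
# of the classifiers discussed above. Components are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)

ensemble = VotingClassifier(
    estimators=[("nb", GaussianNB()),
                ("lr", LogisticRegression(max_iter=5000)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0))],
    voting="soft",  # average class probabilities instead of hard votes
)
print(cross_val_score(ensemble, X, y, cv=5).mean().round(3))
```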

Chinese-English version: http://www.52ml.net/15063.html
