Model Evaluation --- Cross-Validation

1. Manual cross-validation

# Import the linear regression class
from sklearn.linear_model import LinearRegression
# Sklearn also has a helper that makes it easy to do cross validation
from sklearn.cross_validation import KFold
import numpy as np

# The columns we'll use to predict the target
predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked"]

# Initialize our algorithm class
alg = LinearRegression()
# Generate cross validation folds for the titanic dataset.  It returns the row indices corresponding to the train and test folds.
# We set random_state to ensure we get the same splits every time we run this.
kf = KFold(titanic.shape[0], n_folds=3, random_state=1)

predictions = []
for train, test in kf:
    # The predictors we're using to train the algorithm.  Note how we only take the rows in the train folds.
    train_predictors = (titanic[predictors].iloc[train,:])
    # The target we're using to train the algorithm.
    train_target = titanic["Survived"].iloc[train]
    # Training the algorithm using the predictors and target.
    alg.fit(train_predictors, train_target)
    # We can now make predictions on the test fold
    test_predictions = alg.predict(titanic[predictors].iloc[test,:])
    predictions.append(test_predictions)

# The predictions are in three separate numpy arrays.  Concatenate them into one.
# We concatenate them on axis 0, as they only have one axis.
predictions = np.concatenate(predictions, axis=0)

# Map predictions to outcomes (only possible outcomes are 1 and 0)
predictions[predictions > .5] = 1
predictions[predictions <= .5] = 0
accuracy = sum(predictions == titanic["Survived"]) / len(predictions)
print(accuracy)
0.783389450056
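
Note: the sklearn.cross_validation module used above was deprecated in scikit-learn 0.18 and removed in 0.20. On current versions the same loop runs against sklearn.model_selection; a minimal sketch (the unshuffled fold boundaries match the old splits):

from sklearn.model_selection import KFold

# n_splits replaces n_folds, and kf.split(X) now yields the index arrays;
# random_state is only honoured when shuffle=True, so it is dropped here.
kf = KFold(n_splits=3)

predictions = []
for train, test in kf.split(titanic[predictors]):
    alg.fit(titanic[predictors].iloc[train, :], titanic["Survived"].iloc[train])
    predictions.append(alg.predict(titanic[predictors].iloc[test, :]))
predictions = np.concatenate(predictions, axis=0)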

2. Cross-validation with the cross_validation helper

from sklearn import cross_validation
from sklearn.linear_model import LogisticRegression
# Initialize our algorithm
alg = LogisticRegression(random_state=1)
# Compute the accuracy score for all the cross validation folds.  (much simpler than what we did before!)
scores = cross_validation.cross_val_score(alg, titanic[predictors], titanic["Survived"], cv=3)
# Take the mean of the scores (because we have one for each fold)
print(scores.mean())
0.787878787879
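
On scikit-learn 0.18 and later the same helper lives in sklearn.model_selection; a minimal equivalent sketch:

from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

alg = LogisticRegression(random_state=1)
# Identical call signature -- only the module has moved.
scores = cross_val_score(alg, titanic[predictors], titanic["Survived"], cv=3)
print(scores.mean())
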
from sklearn import cross_validation
from sklearn.ensemble import RandomForestClassifier

predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked"]

# Initialize our algorithm with the default parameters
# n_estimators is the number of trees we want to make
# min_samples_split is the minimum number of rows we need to make a split
# min_samples_leaf is the minimum number of samples we can have at the place where a tree branch ends (the bottom points of the tree)
alg = RandomForestClassifier(random_state=1, n_estimators=10, min_samples_split=2, min_samples_leaf=1)
# Compute the accuracy score for all the cross validation folds.  (much simpler than what we did before!)
kf = cross_validation.KFold(titanic.shape[0], n_folds=3, random_state=1)
scores = cross_validation.cross_val_score(alg, titanic[predictors], titanic["Survived"], cv=kf)

# Take the mean of the scores (because we have one for each fold)
print(scores.mean())
0.785634118967
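
The three tree parameters above are natural candidates for tuning with cross-validation. A minimal grid-search sketch (the candidate values below are illustrative, not from the original post):

from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

# Candidate values to try for each parameter (illustrative choices).
param_grid = {
    "n_estimators": [10, 50, 100],
    "min_samples_split": [2, 4, 8],
    "min_samples_leaf": [1, 2, 4],
}
# GridSearchCV cross-validates every combination and keeps the best mean score.
search = GridSearchCV(RandomForestClassifier(random_state=1), param_grid, cv=3)
search.fit(titanic[predictors], titanic["Survived"])
print(search.best_params_, search.best_score_)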

3. Multi-model voting classification

from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

predictors = ["Pclass", "Sex", "Age", "Fare", "Embarked", "FamilySize", "Title"]

algorithms = [
    [GradientBoostingClassifier(random_state=1, n_estimators=25, max_depth=3), predictors],
    [LogisticRegression(random_state=1), ["Pclass", "Sex", "Fare", "FamilySize", "Title", "Age", "Embarked"]]
]

full_predictions = []
for alg, predictors in algorithms:
    # Fit the algorithm using the full training data.
    alg.fit(titanic[predictors], titanic["Survived"])
    # Predict using the test dataset.  We have to convert all the columns to floats to avoid an error.
    predictions = alg.predict_proba(titanic_test[predictors].astype(float))[:,1]
    full_predictions.append(predictions)

# The gradient boosting classifier generates better predictions, so we weight it higher.
predictions = (full_predictions[0] * 3 + full_predictions[1]) / 4
predictions
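
As with the per-fold predictions below, the blended probabilities still need to be thresholded at .5 to obtain 0/1 class labels; for a Kaggle-style submission they are usually also cast to int:

predictions[predictions <= .5] = 0
predictions[predictions > .5] = 1
predictions = predictions.astype(int)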

Ensemble classification within cross-validation

from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import KFold
import numpy as np

# The algorithms we want to ensemble.
# We're using the more linear predictors for the logistic regression, and everything with the gradient boosting classifier.
algorithms = [
    [GradientBoostingClassifier(random_state=1, n_estimators=25, max_depth=3), ["Pclass", "Sex", "Age", "Fare", "Embarked", "FamilySize", "Title",]],
    [LogisticRegression(random_state=1), ["Pclass", "Sex", "Fare", "FamilySize", "Title", "Age", "Embarked"]]
]

# Initialize the cross validation folds
kf = KFold(titanic.shape[0], n_folds=3, random_state=1)

predictions = []
for train, test in kf:
    train_target = titanic["Survived"].iloc[train]
    full_test_predictions = []
    # Make predictions for each algorithm on each fold
    for alg, predictors in algorithms:
        # Fit the algorithm on the training data.
        alg.fit(titanic[predictors].iloc[train,:], train_target)
        # Select and predict on the test fold.
        # The .astype(float) is necessary to convert the dataframe to all floats and avoid an sklearn error.
        test_predictions = alg.predict_proba(titanic[predictors].iloc[test,:].astype(float))[:,1]
        full_test_predictions.append(test_predictions)
    # Use a simple ensembling scheme -- just average the predictions to get the final classification.
    test_predictions = (full_test_predictions[0] + full_test_predictions[1]) / 2
    # Any value over .5 is assumed to be a 1 prediction, and .5 or below is a 0 prediction.
    test_predictions[test_predictions <= .5] = 0
    test_predictions[test_predictions > .5] = 1
    predictions.append(test_predictions)

# Put all the predictions together into one array.
predictions = np.concatenate(predictions, axis=0)

# Compute accuracy by comparing to the training data.
accuracy = sum(predictions == titanic["Survived"]) / len(predictions)
print(accuracy)
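
Equivalently, sklearn's built-in metric replaces the manual comparison:

from sklearn.metrics import accuracy_score

# Fraction of positions where prediction and label match exactly.
print(accuracy_score(titanic["Survived"], predictions))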

Original source: https://www.cnblogs.com/itbuyixiaogong/p/9848761.html
