scikit-learn:3.3. Model evaluation: quantifying the quality of predictions

Reference: http://scikit-learn.org/stable/modules/model_evaluation.html#scoring-parameter

There are three ways to evaluate the prediction quality of a model: the estimator's score method, the scoring parameter of model-selection tools, and the metric functions in sklearn.metrics.

The last part introduces Dummy estimators, which provide strategies for random guessing and can serve as a baseline for prediction quality.

(See section 6.)

See also

For “pairwise” metrics, between samples and not estimators or predictions, see the Pairwise metrics, Affinities and Kernels section.

Details will be filled in when time permits.

1、

The scoring parameter: defining model evaluation rules

Model selection and evaluation tools such as grid_search.GridSearchCV and cross_validation.cross_val_score take a scoring parameter that controls which metric they apply to the estimators being evaluated.

1)Predefined scoring values

All scorers follow the convention that higher return values are better. Hence mean_absolute_error and mean_squared_error (which measure the distance between predictions and the data) are reported as negative values.

Scoring                    Function                         Comment
Classification
'accuracy'                 metrics.accuracy_score
'average_precision'        metrics.average_precision_score
'f1'                       metrics.f1_score                 for binary targets
'f1_micro'                 metrics.f1_score                 micro-averaged
'f1_macro'                 metrics.f1_score                 macro-averaged
'f1_weighted'              metrics.f1_score                 weighted average
'f1_samples'               metrics.f1_score                 by multilabel sample
'log_loss'                 metrics.log_loss                 requires predict_proba support
'precision' etc.           metrics.precision_score          suffixes apply as with 'f1'
'recall' etc.              metrics.recall_score             suffixes apply as with 'f1'
'roc_auc'                  metrics.roc_auc_score
Clustering
'adjusted_rand_score'      metrics.adjusted_rand_score
Regression
'mean_absolute_error'      metrics.mean_absolute_error
'mean_squared_error'       metrics.mean_squared_error
'median_absolute_error'    metrics.median_absolute_error
'r2'                       metrics.r2_score

An example:

>>> from sklearn import svm, cross_validation, datasets
>>> iris = datasets.load_iris()
>>> X, y = iris.data, iris.target
>>> model = svm.SVC()
>>> cross_validation.cross_val_score(model, X, y, scoring='wrong_choice')
Traceback (most recent call last):
ValueError: 'wrong_choice' is not a valid scoring value. Valid options are ['accuracy', 'adjusted_rand_score', 'average_precision', 'f1', 'f1_macro', 'f1_micro', 'f1_samples', 'f1_weighted', 'log_loss', 'mean_absolute_error', 'mean_squared_error', 'median_absolute_error', 'precision', 'precision_macro', 'precision_micro', 'precision_samples', 'precision_weighted', 'r2', 'recall', 'recall_macro', 'recall_micro', 'recall_samples', 'recall_weighted', 'roc_auc']
>>> clf = svm.SVC(probability=True, random_state=0)
>>> cross_validation.cross_val_score(clf, X, y, scoring='log_loss')
array([-0.07..., -0.16..., -0.06...])

3)Defining your own scoring strategy

A custom scorer must follow two rules (a minimal sketch follows the list):

  • It can be called with parameters (estimator, X, y),
    where estimator is the model that should be evaluated, X is
    validation data, and y is the ground truth target for X (in
    the supervised case) or None (in the unsupervised case).
  • It returns a floating point number that quantifies the estimator prediction
    quality on X, with reference to y.
    Again, by convention higher numbers are better, so if your scorer returns loss, that value should be negated.
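
As an illustration, here is a minimal sketch of such a callable. The name neg_mae_scorer and the choice of negated mean absolute error are made up here for the example, not taken from the original docs:

>>> import numpy as np
>>> def neg_mae_scorer(estimator, X, y):
...     # predict on the validation data and return the negated mean
...     # absolute error, so that higher values mean better quality
...     y_pred = estimator.predict(X)
...     return -np.mean(np.abs(np.asarray(y) - y_pred))
...
>>> # it can then be passed wherever a scoring parameter is accepted:
>>> # cross_validation.cross_val_score(model, X, y, scoring=neg_mae_scorer)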

2、

Classification metrics

The sklearn.metrics module implements several loss, score, and utility functions to measure classification performance.

Some of these are restricted to the binary classification case:

matthews_corrcoef(y_true, y_pred) Compute the Matthews correlation coefficient (MCC) for binary classes
precision_recall_curve(y_true, probas_pred) Compute precision-recall pairs for different probability thresholds
roc_curve(y_true, y_score[, pos_label, ...]) Compute Receiver operating characteristic (ROC)

Others also work in the multiclass case:

confusion_matrix(y_true, y_pred[, labels]) Compute confusion matrix to evaluate the accuracy of a classification
hinge_loss(y_true, pred_decision[, labels, ...]) Average hinge loss (non-regularized)

Some also work in the multilabel case:

accuracy_score(y_true, y_pred[, normalize, ...]) Accuracy classification score.
classification_report(y_true, y_pred[, ...]) Build a text report showing the main classification metrics
f1_score(y_true, y_pred[, labels, ...]) Compute the F1 score, also known as balanced F-score or F-measure
fbeta_score(y_true, y_pred, beta[, labels, ...]) Compute the F-beta score
hamming_loss(y_true, y_pred[, classes]) Compute the average Hamming loss.
jaccard_similarity_score(y_true, y_pred[, ...]) Jaccard similarity coefficient score
log_loss(y_true, y_pred[, eps, normalize, ...]) Log loss, aka logistic loss or cross-entropy loss.
precision_recall_fscore_support(y_true, y_pred) Compute precision, recall, F-measure and support for each class
precision_score(y_true, y_pred[, labels, ...]) Compute the precision
recall_score(y_true, y_pred[, labels, ...]) Compute the recall
zero_one_loss(y_true, y_pred[, normalize, ...]) Zero-one classification loss.

And some work with binary and multilabel (but not multiclass) problems:

average_precision_score(y_true, y_score[, ...]) Compute average precision (AP) from prediction scores
roc_auc_score(y_true, y_score[, average, ...]) Compute Area Under the Curve (AUC) from prediction scores

In the following sub-sections, we will describe each of those functions, preceded by some notes on common API and metric definition.

2)accuracy score:

The accuracy_score function computes the accuracy: by default the fraction of samples predicted correctly; if normalize=False is set, it returns the absolute number of correct predictions instead. An example makes this clear:

>>> import numpy as np
>>> from sklearn.metrics import accuracy_score
>>> y_pred = [0, 2, 1, 3]
>>> y_true = [0, 1, 2, 3]
>>> accuracy_score(y_true, y_pred)
0.5
>>> accuracy_score(y_true, y_pred, normalize=False)
2

For multilabel classification, a sample counts as correctly predicted only if all of its labels are predicted correctly. An example makes this clear:

>>> accuracy_score(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))
0.5


3)confusion matrix:

The confusion_matrix function evaluates classification accuracy by computing the confusion matrix. An example:

>>> from sklearn.metrics import confusion_matrix
>>> y_true = [2, 0, 2, 2, 0, 1]
>>> y_pred = [0, 0, 2, 2, 0, 2]
>>> confusion_matrix(y_true, y_pred)
array([[2, 0, 0],
       [0, 0, 1],
       [1, 0, 2]])

(Note: rows correspond to the true labels, columns to the predicted labels.)

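As a side note not in the original page: since each row of the confusion matrix sums to the number of true samples of that class, the diagonal can be turned into per-class recall. A minimal sketch, continuing the example above:

>>> cm = confusion_matrix(y_true, y_pred)
>>> # each row i sums to the number of true samples of class i, so
>>> # dividing the diagonal by the row sums yields per-class recall
>>> cm.diagonal() / cm.sum(axis=1).astype(float)
array([ 1.        ,  0.        ,  0.66666667])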

4)classification report:

The classification_report function builds a text report showing the main classification metrics. An example:

>>> from sklearn.metrics import classification_report
>>> y_true = [0, 1, 2, 2, 0]
>>> y_pred = [0, 0, 2, 2, 0]
>>> target_names = ['class 0', 'class 1', 'class 2']
>>> print(classification_report(y_true, y_pred, target_names=target_names))
             precision    recall  f1-score   support

    class 0       0.67      1.00      0.80         2
    class 1       0.00      0.00      0.00         1
    class 2       1.00      1.00      1.00         2

avg / total       0.67      0.80      0.72         5


The following metrics are used less often; they are simply listed here without much explanation or translation:

5)hamming loss:

If $\hat{y}_j$ is the predicted value for the $j$-th label of a given sample, $y_j$ is the corresponding true value, and $n_\text{labels}$ is the number of classes or labels, then the Hamming loss $L_{Hamming}$ between two samples is defined as:

$$L_{Hamming}(y, \hat{y}) = \frac{1}{n_\text{labels}} \sum_{j=0}^{n_\text{labels}-1} 1(\hat{y}_j \ne y_j)$$
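
A short usage example (adapted from the scikit-learn docs):

>>> from sklearn.metrics import hamming_loss
>>> y_pred = [1, 2, 3, 4]
>>> y_true = [2, 2, 3, 4]
>>> hamming_loss(y_true, y_pred)
0.25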

6)jaccard similarity coefficient score:

The Jaccard similarity coefficient of the $i$-th samples, with a ground truth label set $y_i$ and predicted label set $\hat{y}_i$, is defined as

$$J(y_i, \hat{y}_i) = \frac{|y_i \cap \hat{y}_i|}{|y_i \cup \hat{y}_i|}$$
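
A short usage example (adapted from the scikit-learn docs; in the multiclass case this reduces to accuracy):

>>> from sklearn.metrics import jaccard_similarity_score
>>> y_pred = [0, 2, 1, 3]
>>> y_true = [0, 1, 2, 3]
>>> jaccard_similarity_score(y_true, y_pred)
0.5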

7)precision, recall, f-measures:

Several functions allow you to analyze the precision, recall and F-measures score:

average_precision_score(y_true, y_score[, ...]) Compute average precision (AP) from prediction scores
f1_score(y_true, y_pred[, labels, ...]) Compute the F1 score, also known as balanced F-score or F-measure
fbeta_score(y_true, y_pred, beta[, labels, ...]) Compute the F-beta score
precision_recall_curve(y_true, probas_pred) Compute precision-recall pairs for different probability thresholds
precision_recall_fscore_support(y_true, y_pred) Compute precision, recall, F-measure and support for each class
precision_score(y_true, y_pred[, labels, ...]) Compute the precision
recall_score(y_true, y_pred[, labels, ...]) Compute the recall

Note that the precision_recall_curve function is restricted to the binary case. The average_precision_score function works only in binary classification and multilabel indicator format.
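
A small binary-classification example (adapted from the scikit-learn docs):

>>> from sklearn import metrics
>>> y_pred = [0, 1, 0, 0]
>>> y_true = [0, 1, 0, 1]
>>> metrics.precision_score(y_true, y_pred)
1.0
>>> metrics.recall_score(y_true, y_pred)
0.5
>>> metrics.f1_score(y_true, y_pred)
0.66...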

8)hinge loss:

9)log loss:

10)matthews correlation coefficient:

11)receiver operating characteristic (ROC):

12)zero one loss:

3、

Multilabel ranking metrics

In multilabel learning, each sample can have any number of ground truth labels associated with it. The goal is to give the ground truth labels higher scores and better ranks.

1)coverage error:
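
Even without the details, a short coverage_error usage example (adapted from the scikit-learn docs) shows the idea:

>>> import numpy as np
>>> from sklearn.metrics import coverage_error
>>> y_true = np.array([[1, 0, 0], [0, 0, 1]])
>>> y_score = np.array([[0.75, 0.5, 1], [1, 0.2, 0.1]])
>>> coverage_error(y_true, y_score)
2.5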

2)label ranking average precision:

4、

Regression metrics

The sklearn.metrics module implements several loss, score, and utility functions to measure regression performance.

Some of those have been enhanced to handle the multioutput case: mean_absolute_error, mean_squared_error, median_absolute_error and r2_score.

1)explained variance score:

If $\hat{y}$ is the estimated target output, $y$ the corresponding (correct) target output, and $Var$ is variance, the square of the standard deviation, then the explained variance is estimated as follows:

$$\text{explained variance}(y, \hat{y}) = 1 - \frac{Var\{y - \hat{y}\}}{Var\{y\}}$$
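
A short usage example (adapted from the scikit-learn docs):

>>> from sklearn.metrics import explained_variance_score
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> explained_variance_score(y_true, y_pred)
0.957...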

2)mean absolute error:

If $\hat{y}_i$ is the predicted value of the $i$-th sample, and $y_i$ is the corresponding true value, then the mean absolute error (MAE) estimated over $n_\text{samples}$ is defined as

$$\text{MAE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} \left| y_i - \hat{y}_i \right|$$

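A short usage example (adapted from the scikit-learn docs):

>>> from sklearn.metrics import mean_absolute_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_absolute_error(y_true, y_pred)
0.5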

3)mean squared error:

If $\hat{y}_i$ is the predicted value of the $i$-th sample, and $y_i$ is the corresponding true value, then the mean squared error (MSE) estimated over $n_\text{samples}$ is defined as

$$\text{MSE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} (y_i - \hat{y}_i)^2$$
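
A short usage example (adapted from the scikit-learn docs):

>>> from sklearn.metrics import mean_squared_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_squared_error(y_true, y_pred)
0.375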

4)R^2 score, the coefficient of determination:

If $\hat{y}_i$ is the predicted value of the $i$-th sample and $y_i$ is the corresponding true value, then the score $R^2$ estimated over $n_\text{samples}$ is defined as

$$R^2(y, \hat{y}) = 1 - \frac{\sum_{i=0}^{n_\text{samples}-1} (y_i - \hat{y}_i)^2}{\sum_{i=0}^{n_\text{samples}-1} (y_i - \bar{y})^2} \quad \text{where} \quad \bar{y} = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples}-1} y_i$$

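A short usage example (adapted from the scikit-learn docs):

>>> from sklearn.metrics import r2_score
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> r2_score(y_true, y_pred)
0.948...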

5、

Clustering metrics

The sklearn.metrics module implements several loss, score, and utility functions. For more information see the Clustering performance evaluation section for instance clustering, and Biclustering evaluation for biclustering.

6、Dummy estimators

For supervised learning, randomly generated results provide a very easy baseline to compare against.

DummyClassifier provides several simple strategies for producing such predictions:

  • stratified generates random predictions by respecting the training set class distribution.
  • most_frequent always predicts the most frequent label in the training set.
  • uniform generates predictions uniformly at random.
  • constant always predicts a constant label that is provided by the user.(A
    major motivation of this method is F1-scoring, when the positive class is in the minority.)

Note that with all these strategies, the predict method completely ignores the input data!

A simple example:

First, let’s create an imbalanced dataset:

>>> from sklearn.datasets import load_iris
>>> from sklearn.cross_validation import train_test_split
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> y[y != 1] = -1
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

Next, let’s compare the accuracy of SVC and most_frequent:

>>> from sklearn.dummy import DummyClassifier
>>> from sklearn.svm import SVC
>>> clf = SVC(kernel='linear', C=1).fit(X_train, y_train)
>>> clf.score(X_test, y_test)
0.63...
>>> clf = DummyClassifier(strategy='most_frequent', random_state=0)
>>> clf.fit(X_train, y_train)
DummyClassifier(constant=None, random_state=0, strategy='most_frequent')
>>> clf.score(X_test, y_test)
0.57...

We see that SVC doesn’t do much better than a dummy classifier. Now, let’s change the kernel:

>>> clf = SVC(kernel='rbf', C=1).fit(X_train, y_train)
>>> clf.score(X_test, y_test)
0.97...

Similarly, for regression problems:

DummyRegressor also implements four simple rules of thumb for regression:

  • mean always predicts the mean of the training targets.
  • median always predicts the median of the training targets.
  • quantile always predicts a user provided quantile of the training targets.
  • constant always predicts a constant value that is provided by the user.

In all these strategies, the predict method completely ignores the input data.
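
A minimal sketch of the mean strategy (the data here is made up for illustration):

>>> import numpy as np
>>> from sklearn.dummy import DummyRegressor
>>> X = np.array([[1.0], [2.0], [3.0], [4.0]])
>>> y = np.array([2.0, 4.0, 6.0, 8.0])
>>> dummy = DummyRegressor(strategy='mean').fit(X, y)
>>> dummy.predict(X)  # always the training-set mean, whatever X is
array([ 5.,  5.,  5.,  5.])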
