A Simple Proof of the AdaBoost Algorithm


This article gives a simple, elementary proof concerning the AdaBoost algorithm in machine learning; the only mathematical tool required is first-semester calculus (Calculus I).

AdaBoost is a powerful algorithm for building predictive models. A major disadvantage, however, is that AdaBoost may overfit in the presence of noise. Freund, Y. & Schapire, R. E. (1997) proved that the training error of the ensemble is bounded by the following expression: \begin{equation}\label{ada1}e_{ensemble}\le \prod_{t}2\cdot\sqrt{\epsilon_t\cdot(1-\epsilon_t)} \end{equation} where $\epsilon_t$ is the error rate of base classifier $t$. If each error rate is less than 0.5, we can write $\epsilon_t=0.5-\gamma_t$, where $\gamma_t$ measures how much better the classifier is than random guessing (on binary problems). The bound on the training error of the ensemble then becomes \begin{equation}\label{ada2} e_{ensemble}\le \prod_{t}\sqrt{1-4{\gamma_t}^2}\le e^{-2\sum_{t}{\gamma_t}^2} \end{equation} Thus, if each base classifier is slightly better than random, so that $\gamma_t>\gamma$ for some $\gamma>0$, then the training error drops exponentially fast in the number of rounds. Nevertheless, because of its tendency to focus on misclassified training examples, the AdaBoost algorithm can be quite susceptible to over-fitting. We give a new, simple proof of \ref{ada1} and \ref{ada2}; additionally, we explain why the parameter $\alpha_t=\frac{1}{2}\cdot\log\frac{1-\epsilon_t}{\epsilon_t}$ is chosen in the boosting algorithm.
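As a quick illustration of how fast this bound shrinks (a made-up numerical example, not taken from the original paper): if every base classifier satisfies $\gamma_t\ge 0.1$, then $\sum_t\gamma_t^2\ge 0.01\,T$ after $T$ rounds, so the bound gives $e_{ensemble}\le e^{-0.02T}$. With $T=100$ rounds the training error is already forced below $e^{-2}\approx 0.135$, and with $T=500$ rounds below $e^{-10}\approx 4.5\times10^{-5}$.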

AdaBoost Algorithm:

Recall the boosting algorithm: given training examples $(x_1, y_1), (x_2, y_2), \cdots, (x_m, y_m)$, where $x_i\in X$ and $y_i\in Y=\{-1, +1\}$.

Initialize $D_1(i)=\frac{1}{m}$. For $t=1, 2, \ldots, T$: Train a weak learner using distribution $D_t$. Get weak hypothesis $h_t: X\rightarrow \{-1, +1\}$ with error \[\epsilon_t=\Pr_{i\sim D_t}[h_t (x_i)\ne y_i]\] If $\epsilon_t >0.5$, then the weights $D_t (i)$ are reverted back to their original uniform values $\frac{1}{m}$.

Choose \begin{equation}\label{boost3} \alpha_t=\frac{1}{2}\cdot \log\frac{1-\epsilon_t}{\epsilon_t} \end{equation}

Update: \begin{equation}\label{boost4} D_{t+1}(i)=\frac{D_{t}(i)}{Z_t}\times \left\{\begin{array}{c c} e^{-\alpha_t} & \quad \textrm{if $h_t(x_i)=y_i$}\\ e^{\alpha_t} & \quad \textrm{if $h_t(x_i)\ne y_i$} \end{array} \right. \end{equation} where $Z_t$ is a normalization factor.

Output the final hypothesis: \[H(x)=\mathrm{sign}\left(\sum_{t=1}^{T}\alpha_t\cdot h_t(x)\right)\]
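To make the algorithm concrete, here is a minimal Python sketch of the training loop as just described. The weak learner is a placeholder: `train_weak_learner(X, y, D)` is a hypothetical function you supply, which must return a hypothesis `h` such that `h(X)` gives predictions in $\{-1, +1\}$.

```python
import numpy as np

def adaboost(X, y, train_weak_learner, T):
    """Minimal AdaBoost training loop following the algorithm above.

    X: (m, d) array of features; y: array of labels in {-1, +1}.
    train_weak_learner(X, y, D) -> h, a hypothesis with h(X) in {-1, +1}^m.
    """
    m = len(y)
    D = np.full(m, 1.0 / m)                 # D_1(i) = 1/m
    hypotheses, alphas = [], []
    for t in range(T):
        h = train_weak_learner(X, y, D)     # weak hypothesis h_t trained on D_t
        pred = h(X)
        eps = float(np.sum(D[pred != y]))   # epsilon_t = Pr_{i ~ D_t}[h_t(x_i) != y_i]
        if eps > 0.5:                       # revert weights to uniform, as in the text
            D = np.full(m, 1.0 / m)
            continue
        eps = max(eps, 1e-12)               # guard against log(0) when the learner is perfect
        alpha = 0.5 * np.log((1.0 - eps) / eps)   # alpha_t = (1/2) * log((1 - eps_t) / eps_t)
        D = D * np.exp(-alpha * y * pred)         # e^{-alpha} if correct, e^{alpha} if not
        D = D / D.sum()                           # divide by the normalization factor Z_t
        hypotheses.append(h)
        alphas.append(alpha)

    def H(X_new):                           # final hypothesis: sign of the weighted vote
        votes = sum(a * h(X_new) for a, h in zip(alphas, hypotheses))
        return np.sign(votes)

    return H
```

Each pass through the loop corresponds to one round of the algorithm above: computing $\epsilon_t$, choosing $\alpha_t$, and reweighting the examples before dividing by $Z_t$.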

Proof: First we prove \ref{ada1}. Note that $D_{t+1}(i)$ is a distribution, so $\sum_{i}D_{t+1}(i)=1$, hence \[Z_t=\sum_{i}D_{t+1}(i)\cdot Z_t=\sum_{i}D_t(i)\times \left\{\begin{array}{c c} e^{-\alpha_t} & \quad \textrm{if $h_t(x_i)=y_i$}\\ e^{\alpha_t} & \quad \textrm{if $h_t(x_i)\ne y_i$} \end{array} \right.\] \[=\sum_{i:\ h_t(x_i)=y_i}D_t(i)\cdot e^{-\alpha_t}+\sum_{i:\ h_t(x_i)\ne y_i}D_t(i)\cdot e^{\alpha_t}\] \[=e^{-\alpha_t}\cdot \sum_{i:\ h_t(x_i)=y_i}D_t(i)+e^{\alpha_t}\cdot \sum_{i:\ h_t(x_i)\ne y_i}D_t(i)\] \begin{equation}\label{boost5} =e^{-\alpha_t}\cdot (1-\epsilon_t)+e^{\alpha_t}\cdot \epsilon_t \end{equation} To find $\alpha_t$ we minimize $Z_t$ by setting its first-order derivative with respect to $\alpha_t$ equal to 0: \[\frac{d}{d\alpha_t}\left[e^{-\alpha_t}\cdot (1-\epsilon_t)+e^{\alpha_t}\cdot \epsilon_t\right]=-e^{-\alpha_t}\cdot (1-\epsilon_t)+e^{\alpha_t}\cdot \epsilon_t=0\] \[\Rightarrow \alpha_t=\frac{1}{2}\cdot \log\frac{1-\epsilon_t}{\epsilon_t}\] which is \ref{boost3} in the boosting algorithm. Substituting $\alpha_t$ back into \ref{boost5} gives \[Z_t=e^{-\alpha_t}\cdot (1-\epsilon_t)+e^{\alpha_t}\cdot \epsilon_t=e^{-\frac{1}{2}\log\frac{1-\epsilon_t}{\epsilon_t}}\cdot (1-\epsilon_t)+e^{\frac{1}{2}\log\frac{1-\epsilon_t}{\epsilon_t}}\cdot\epsilon_t\] \begin{equation}\label{boost6} =2\sqrt{\epsilon_t\cdot(1-\epsilon_t)} \end{equation}

On the other hand, from \ref{boost4} we have \[D_{t+1}(i)=\frac{D_t(i)\cdot e^{-\alpha_t\cdot y_i\cdot h_t(x_i)}}{Z_t}=\frac{D_t(i)\cdot e^{K_t}}{Z_t}\] where $K_t=-\alpha_t\cdot y_i\cdot h_t(x_i)$, since the product $y_i\cdot h_t(x_i)$ is $1$ if $h_t(x_i)=y_i$ and $-1$ if $h_t(x_i)\ne y_i$. Thus we can write down all of the equations \[D_1(i)=\frac{1}{m}\] \[D_2(i)=\frac{D_1(i)\cdot e^{K_1}}{Z_1}\] \[D_3(i)=\frac{D_2(i)\cdot e^{K_2}}{Z_2}\] \[\ldots\ldots\ldots\] \[D_{t+1}(i)=\frac{D_t(i)\cdot e^{K_t}}{Z_t}\] Multiplying all the equalities above, and noting that $\sum_{t}K_t=-y_i\cdot\sum_{t}\alpha_t\cdot h_t(x_i)=-y_i\cdot f(x_i)$, we obtain \[D_{t+1}(i)=\frac{1}{m}\cdot\frac{e^{-y_i\cdot f(x_i)}}{\prod_{t}Z_t}\] where $f(x_i)=\sum_{t}\alpha_t\cdot h_t(x_i)$. Thus \begin{equation}\label{boost7} \frac{1}{m}\cdot \sum_{i}e^{-y_i\cdot f(x_i)}=\sum_{i}D_{t+1}(i)\cdot\prod_{t}Z_t=\prod_{t}Z_t \end{equation}

Note that if $\epsilon_t>0.5$ the data set will be re-sampled until $\epsilon_t\le0.5$; in other words, the parameter $\alpha_t\ge0$ in each valid iteration. The training error of the ensemble can be expressed as \[e_{ensemble}=\frac{1}{m}\cdot\sum_{i}\left\{\begin{array}{c c} 1 & \quad \textrm{if $H(x_i)\ne y_i$}\\ 0 & \quad \textrm{if $H(x_i)=y_i$} \end{array} \right. =\frac{1}{m}\cdot \sum_{i}\left\{\begin{array}{c c} 1 & \quad \textrm{if $y_i\cdot f(x_i)\le0$}\\ 0 & \quad \textrm{if $y_i\cdot f(x_i)>0$} \end{array} \right.\] \begin{equation}\label{boost8} \le\frac{1}{m}\cdot\sum_{i}e^{-y_i\cdot f(x_i)}=\prod_{t}Z_t \end{equation} The inequality holds because $e^{-y_i\cdot f(x_i)}\ge1$ whenever $y_i\cdot f(x_i)\le0$ and $e^{-y_i\cdot f(x_i)}>0$ otherwise; the last equality follows from \ref{boost7}. Combining \ref{boost6} and \ref{boost8}, we have proved \ref{ada1}: \begin{equation}\label{boost9} e_{ensemble}\le \prod_{t}2\cdot\sqrt{\epsilon_t\cdot(1-\epsilon_t)} \end{equation}

In order to prove \ref{ada2}, we first prove the following inequality: \begin{equation}\label{boost10} 1+x\le e^x \end{equation} or, equivalently, $e^x-x-1\ge0$. Let $f(x)=e^x-x-1$; then \[f'(x)=e^x-1=0\Rightarrow x=0\] Since $f''(x)=e^x>0$, we get \[{f(x)}_{min}=f(0)=0\Rightarrow e^x-x-1\ge0\] as desired.
Now we return to \ref{boost9} and let \[\epsilon_t=\frac{1}{2}-\gamma_t\] where $\gamma_t$ measures how much better the classifier is than random guessing (on binary problems). Applying \ref{boost10} with $x=-4\gamma_t^2$, we have \[e_{ensemble}\le\prod_{t}2\cdot\sqrt{\epsilon_t\cdot(1-\epsilon_t)}\] \[=\prod_{t}\sqrt{1-4\gamma_t^2}\] \[=\prod_{t}[1+(-4\gamma_t^2)]^{\frac{1}{2}}\] \[\le\prod_{t}(e^{-4\gamma_t^2})^\frac{1}{2}=\prod_{t}e^{-2\gamma_t^2}\] \[=e^{-2\cdot\sum_{t}\gamma_t^2}\] as desired.
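As a quick numerical sanity check of the chain of inequalities just proved (the per-round error rates below are made up purely for illustration), one can evaluate the three quantities directly:

```python
import numpy as np

# Hypothetical per-round error rates, all below 0.5, purely for illustration.
eps = np.array([0.30, 0.40, 0.25, 0.45, 0.35])
gamma = 0.5 - eps

prod_Z = np.prod(2.0 * np.sqrt(eps * (1.0 - eps)))   # prod_t 2*sqrt(eps_t*(1-eps_t))
middle = np.prod(np.sqrt(1.0 - 4.0 * gamma ** 2))    # prod_t sqrt(1 - 4*gamma_t^2)
upper = np.exp(-2.0 * np.sum(gamma ** 2))            # exp(-2 * sum_t gamma_t^2)

print(prod_Z, middle, upper)
assert np.isclose(prod_Z, middle) and middle <= upper  # matches the chain of bounds above
```

For these values the first two quantities agree (about 0.74) and both stay below the exponential bound (about 0.76), exactly as the proof requires.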
