Introduction
Logistic regression is often used to predict the probability that a disease occurs: the dependent variable might be whether a tumor is malignant, and the independent variables the tumor's size, location, and firmness, plus the patient's sex, age, occupation, and so on (many articles use this example, though modern medicine can settle the question with a pathology exam, i.e. putting a tissue sample under a microscope to check for malignancy). In online advertising it is widely used to predict click-through or conversion rates (CTR/CVR): the dependent variable is whether the user clicks, and the independent variables include the creative's width and height, the ad's position and type, and the user's sex, interests, and so on.
This chapter covers the derivation of the logistic regression algorithm, the derivation of gradient descent for finding the optimum, and Spark's source-code implementation.
Standard approach
The usual steps for a regression problem are:
1. Choose a prediction function (the hypothesis, h)
2. Construct a loss function (J)
3. Minimize the loss function to obtain the regression coefficients θ
Common algorithms for step 3 include:
1. Gradient descent
2. Newton's method
3. Quasi-Newton methods (BFGS and L-BFGS)
Of these, stochastic gradient descent and L-BFGS are already implemented in Spark MLlib; gradient descent is the simplest and easiest to understand.
Derivation
Binary logistic regression
- Constructing the prediction function
$$h_\theta(x)=g(\theta^Tx)=\frac{1}{1+e^{-\theta^Tx}}$$
where:
$$\theta^Tx=\sum_{i=0}^n\theta_ix_i=\theta_0+\theta_1x_1+\theta_2x_2+...+\theta_nx_n,\qquad\theta=\begin{pmatrix}\theta_0\\\theta_1\\\vdots\\\theta_n\end{pmatrix},\quad x=\begin{pmatrix}x_0\\x_1\\\vdots\\x_n\end{pmatrix}$$
(with $x_0=1$ for the intercept).
Why does the LR model single out the sigmoid function? Logistic regression is not a regression problem but a binary classification problem: the dependent variable is either 0 or 1, so it is natural to assume the outcome follows a Bernoulli distribution, and the exponential-family form of the Bernoulli distribution is exactly a sigmoid.
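To make that step explicit: write the Bernoulli pmf with mean $\phi$ in exponential-family form,
$$p(y;\phi)=\phi^y(1-\phi)^{1-y}=\exp\left(y\log\frac{\phi}{1-\phi}+\log(1-\phi)\right)$$
The natural parameter is $\eta=\log\frac{\phi}{1-\phi}$; solving for $\phi$ gives $\phi=\frac{1}{1+e^{-\eta}}$, the sigmoid, and taking the linear model $\eta=\theta^Tx$ recovers $h_\theta(x)$ above.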
The function hθ(x) gives the probability that the outcome is 1, so the probabilities of classes 1 and 0 are:
$$P(y=1|x;\theta)=h_\theta(x),\qquad P(y=0|x;\theta)=1-h_\theta(x)$$
or, as a single expression:
$$P(y|x;\theta)=(h_\theta(x))^y(1-h_\theta(x))^{1-y}$$
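As a quick concreteness check, here is a minimal Scala sketch of $h_\theta(x)$ and $P(y|x;\theta)$ (toy code of mine, not Spark's API; the object and function names are hypothetical):

```scala
object LogisticHypothesis {
  // g(z) = 1 / (1 + e^(-z)), the sigmoid function
  def sigmoid(z: Double): Double = 1.0 / (1.0 + math.exp(-z))

  // h_theta(x) = g(theta^T x); theta and x must have the same length,
  // with x(0) assumed to be the constant 1.0 for the intercept
  def hypothesis(theta: Array[Double], x: Array[Double]): Double =
    sigmoid(theta.zip(x).map { case (t, xi) => t * xi }.sum)

  // P(y|x; theta) = h(x)^y * (1 - h(x))^(1 - y), for y in {0, 1}
  def probability(theta: Array[Double], x: Array[Double], y: Int): Double = {
    val h = hypothesis(theta, x)
    if (y == 1) h else 1.0 - h
  }
}
```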
- The idea of maximum likelihood estimation
Having drawn m sample observations at random from the population, our goal is to find the most plausible parameter estimate θ′, the one under which drawing exactly those m observations from the model is most probable. Maximum likelihood estimation is the method for this kind of problem. The steps are:
- Write down the likelihood function
- Take the logarithm of the likelihood function
- Take partial derivatives of the log-likelihood with respect to the parameters, set them to 0, and obtain a system of equations
- Solve the system for the parameters
Why take the logarithm in the second step? Because the log turns products into sums, and since it is monotonically increasing it does not move the location of the extremum; this makes the partial derivatives afterwards much easier to take, as the identity below shows.
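Concretely, for positive factors the logarithm turns the product into a sum:
$$\log\prod_{i=1}^mp_i=\sum_{i=1}^m\log p_i$$
and because $\log$ is strictly increasing, $\arg\max_\theta L(\theta)=\arg\max_\theta\log L(\theta)$.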
- Constructing the loss function
The loss function in linear regression is:
$$J(\theta)=\frac{1}{2m}\sum_{i=1}^m(y_i-h_\theta(x_i))^2$$
The linear-regression loss has an obvious concrete meaning: it is the squared error. Not so for logistic regression, whose prediction function hθ(x) is clearly nonlinear. If we copied the linear-regression loss over to logistic regression, J(θ) would very likely be non-convex, with many local optima and no guarantee that any of them is global. We would rather construct a convex, bowl-shaped function to serve as the logistic regression loss.
Following the maximum-likelihood recipe above, the logistic regression likelihood is:
$$L(\theta)=\prod_{i=1}^mP(y_i|x_i;\theta)=\prod_{i=1}^m(h_\theta(x_i))^{y_i}(1-h_\theta(x_i))^{1-y_i}$$
where m is the number of samples. Taking the log:
$$l(\theta)=\log L(\theta)=\sum_{i=1}^m\left(y_i\log h_\theta(x_i)+(1-y_i)\log(1-h_\theta(x_i))\right)$$
Our goal is the θ that maximizes l(θ). The function above is concave, so gradient ascent could find the maximum likelihood value; equivalently, multiplying it by −1 yields a convex function, and gradient descent then finds the minimum of the negative log-likelihood:
$$J(\theta)=-\frac{1}{m}l(\theta)$$
Minimizing this is exactly the loss-function idea, so we take this J(θ) as the loss function of logistic regression. Andrew Ng's course also multiplies in the 1/m factor; I suspect that is just to keep the form consistent with the linear-regression loss.
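A minimal Scala sketch of this loss, reusing the hypothetical `LogisticHypothesis.hypothesis` from the sketch above (a real implementation would guard against log(0), e.g. with the log1pExp trick Spark uses near the end of this chapter):

```scala
// J(theta) = -(1/m) * sum_i [ y_i*log(h(x_i)) + (1 - y_i)*log(1 - h(x_i)) ]
def logisticLoss(theta: Array[Double], xs: Array[Array[Double]], ys: Array[Int]): Double = {
  val m = xs.length
  val logLikelihood = xs.zip(ys).map { case (x, y) =>
    val h = LogisticHypothesis.hypothesis(theta, x)
    y * math.log(h) + (1 - y) * math.log(1.0 - h)
  }.sum
  -logLikelihood / m
}
```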
- Finding the parameters at the minimum
In step three of maximum likelihood we would set the partial derivatives with respect to θ to 0 and solve the resulting system of equations. But the number of parameters is open-ended and may be very large, and then solving the system directly becomes very difficult, or exact parameters cannot be obtained at all. So instead we solve for the parameters with stochastic gradient descent.
Newton's method and quasi-Newton methods would of course work as well, but gradient descent is the easiest to understand and derive. The derivation follows.
The gradient-descent update for θ steps along the direction opposite to the gradient:
$$\theta_j:=\theta_j-\alpha\frac{\partial}{\partial\theta_j}J(\theta)$$
where:
$$\begin{aligned}
\frac{\partial}{\partial\theta_j}J(\theta)
&=-\frac{1}{m}\sum_{i=1}^m\left(y_i\frac{1}{h_\theta(x_i)}\frac{\partial}{\partial\theta_j}h_\theta(x_i)-(1-y_i)\frac{1}{1-h_\theta(x_i)}\frac{\partial}{\partial\theta_j}h_\theta(x_i)\right)\\
&=-\frac{1}{m}\sum_{i=1}^m\left(y_i\frac{1}{g(\theta^Tx_i)}-(1-y_i)\frac{1}{1-g(\theta^Tx_i)}\right)\frac{\partial}{\partial\theta_j}g(\theta^Tx_i)\\
&=-\frac{1}{m}\sum_{i=1}^m\left(y_i\frac{1}{g(\theta^Tx_i)}-(1-y_i)\frac{1}{1-g(\theta^Tx_i)}\right)g(\theta^Tx_i)(1-g(\theta^Tx_i))\frac{\partial}{\partial\theta_j}\theta^Tx_i\\
&=-\frac{1}{m}\sum_{i=1}^m\left(y_i(1-g(\theta^Tx_i))-(1-y_i)g(\theta^Tx_i)\right)x_i^j\\
&=-\frac{1}{m}\sum_{i=1}^m\left(y_i-g(\theta^Tx_i)\right)x_i^j\\
&=\frac{1}{m}\sum_{i=1}^m\left(h_\theta(x_i)-y_i\right)x_i^j
\end{aligned}$$
For the second line of this derivation, note that:
$$\left(\frac{f(x)}{g(x)}\right)'=\frac{g(x)f'(x)-f(x)g'(x)}{g^2(x)},\qquad(e^x)'=e^x$$
from which we can derive:
$$\frac{\partial}{\partial\theta_j}g(\theta^Tx_i)=\frac{-e^{-\theta^Tx_i}}{(1+e^{-\theta^Tx_i})^2}\cdot\frac{\partial}{\partial\theta_j}\left(-\theta^Tx_i\right)=g(\theta^Tx_i)(1-g(\theta^Tx_i))\frac{\partial}{\partial\theta_j}\theta^Tx_i$$
The update can therefore be written as:
$$\theta_j:=\theta_j-\alpha\frac{1}{m}\sum_{i=1}^m\left(h_\theta(x_i)-y_i\right)x_i^j$$
How many iterations until we stop? Spark stops after a specified number of iterations, or earlier when the change between two consecutive iterations (in the weights or in the cost) falls below a tolerance.
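Putting the update rule into code: a minimal batch-gradient-descent sketch in plain Scala (not the Spark implementation; `alpha` is the learning rate, and convergence checking is reduced to a fixed iteration count for brevity):

```scala
// One full pass computes grad_j = sum_i (h(x_i) - y_i) * x_i(j), then applies
// theta_j := theta_j - alpha * (1/m) * grad_j
def gradientDescent(xs: Array[Array[Double]], ys: Array[Int],
                    alpha: Double, numIterations: Int): Array[Double] = {
  val n = xs.head.length
  val m = xs.length
  val theta = Array.fill(n)(0.0)
  for (_ <- 1 to numIterations) {
    val grad = Array.fill(n)(0.0)
    for ((x, y) <- xs.zip(ys)) {
      val err = LogisticHypothesis.hypothesis(theta, x) - y  // h(x_i) - y_i
      for (j <- 0 until n) grad(j) += err * x(j)
    }
    for (j <- 0 until n) theta(j) -= alpha * grad(j) / m
  }
  theta
}
```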
- Overfitting
Overfitting means the regression coefficients we obtain perform very well on the training set but poorly on data outside it. Features in machine learning are chosen largely by human experience, and some features or feature combinations may have no relationship at all to the dependent variable, i.e. some θi ≈ 0. We would like to drop the unnecessary features; the usual approach is regularization, which keeps all the features but pushes the corresponding coefficients toward 0. With an L2-norm penalty, the update for θ becomes:
$$\theta_j:=\theta_j-\alpha\frac{1}{m}\sum_{i=1}^m\left(h_\theta(x_i)-y_i\right)x_i^j-\frac{\lambda}{m}\theta_j$$
The larger λ is, the more heavily model complexity is penalized, and underfitting may appear; the smaller λ is, the lighter the penalty, and overfitting may remain. Spark's stochastic-gradient-descent logistic regression uses L2-norm regularization.
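In the gradient-descent sketch above, the update step would gain this shrinkage term; a minimal sketch (the function name is mine):

```scala
// L2-regularized update: theta_j := theta_j - alpha * (1/m) * grad_j - (lambda/m) * theta_j
def updateWithL2(theta: Array[Double], grad: Array[Double],
                 alpha: Double, lambda: Double, m: Int): Unit = {
  for (j <- theta.indices)
    theta(j) -= alpha * grad(j) / m + lambda / m * theta(j)
}
```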
Multinomial logistic regression
Now generalize to K-class logistic regression, where the dependent variable takes the values 0, 1, 2, …, K−1. Binary logistic regression has the property:
$$\log\frac{P(y=1|x,\theta)}{P(y=0|x,\theta)}=\theta^Tx$$
Generalizing to K classes:
$$\begin{aligned}
\log\frac{P(y=1|x,\theta)}{P(y=0|x,\theta)}&=\theta_1^Tx\\
\log\frac{P(y=2|x,\theta)}{P(y=0|x,\theta)}&=\theta_2^Tx\\
&\;...\\
\log\frac{P(y=K-1|x,\theta)}{P(y=0|x,\theta)}&=\theta_{K-1}^Tx
\end{aligned}$$
where θ = (θ1, θ2, ..., θ_{K−1})^T is a (K−1)×(n+1) matrix; n is the number of features, and the +1 adds the intercept term. Removing the logarithms gives the probability distribution:
$$\begin{aligned}
P(y=0|x,\theta)&=\frac{1}{1+\sum_{i=1}^{K-1}e^{\theta_i^Tx}}\\
P(y=1|x,\theta)&=\frac{e^{\theta_1^Tx}}{1+\sum_{i=1}^{K-1}e^{\theta_i^Tx}}\\
&\;...\\
P(y=K-1|x,\theta)&=\frac{e^{\theta_{K-1}^Tx}}{1+\sum_{i=1}^{K-1}e^{\theta_i^Tx}}
\end{aligned}$$
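A minimal Scala sketch of this distribution, with class 0 as the pivot (toy code; `thetas` holds the K−1 coefficient vectors, and the function name is mine):

```scala
// Returns P(y = k | x) for k = 0 .. K-1, with class 0 as the reference class
def multinomialProbs(thetas: Array[Array[Double]], x: Array[Double]): Array[Double] = {
  // margins(i) = theta_{i+1}^T x for the K-1 non-reference classes
  val margins = thetas.map(t => t.zip(x).map { case (ti, xi) => ti * xi }.sum)
  val expMargins = margins.map(math.exp)
  val denom = 1.0 + expMargins.sum
  // class 0 contributes the constant 1 in the numerator
  (1.0 +: expMargins).map(_ / denom)
}
```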
The K-class logistic regression likelihood is:
$$L(\theta)=\prod_{i=1}^mP(y_i|x_i,\theta)$$
Define:
$$\alpha(y_i)=\begin{cases}1,&y_i=0\\0,&y_i\neq0\end{cases}$$
Taking the log:
$$\begin{aligned}
l(\theta,x)&=\sum_{i=1}^m\log P(y_i|x_i,\theta)\\
&=\sum_{i=1}^m\alpha(y_i)\log P(y=0|x_i,\theta)+(1-\alpha(y_i))\log P(y_i|x_i,\theta)\\
&=\sum_{i=1}^m\alpha(y_i)\log\frac{1}{1+\sum_{k=1}^{K-1}e^{\theta_k^Tx_i}}+(1-\alpha(y_i))\log\frac{e^{\theta_{y_i}^Tx_i}}{1+\sum_{k=1}^{K-1}e^{\theta_k^Tx_i}}\\
&=\sum_{i=1}^m(1-\alpha(y_i))\theta_{y_i}^Tx_i-\log\left(1+\sum_{k=1}^{K-1}e^{\theta_k^Tx_i}\right)
\end{aligned}$$
and in the same way we obtain the loss function:
$$J(\theta,x)=-\frac{1}{m}l(\theta,x)$$
The update for θ:
$$\theta_{kj}:=\theta_{kj}-\alpha\frac{\partial}{\partial\theta_{kj}}J(\theta,x)$$
Differentiating with respect to θ gives the gradient:
$$G_{kj}(\theta,x)=-\frac{1}{m}\frac{\partial l(\theta,x)}{\partial\theta_{kj}}=-\frac{1}{m}\sum_{i=1}^m\left((1-\alpha(y_i))\delta_{k,y_i}-\frac{e^{\theta_k^Tx_i}}{1+\sum_{k'=1}^{K-1}e^{\theta_{k'}^Tx_i}}\right)x_{ij}$$
where k indexes the class, j the feature, and i the sample.
The Spark source comments do this slightly differently: there, l(w,x) is multiplied by −1, which serves the same purpose as our −1/m above. Let us walk through Spark's derivation.
$$\begin{aligned}
P(y=0|x,w)&=\frac{1}{1+\sum_{i=1}^{K-1}\exp(xw_i)}\\
P(y=1|x,w)&=\frac{\exp(xw_1)}{1+\sum_{i=1}^{K-1}\exp(xw_i)}\\
&\;...\\
P(y=K-1|x,w)&=\frac{\exp(xw_{K-1})}{1+\sum_{i=1}^{K-1}\exp(xw_i)}
\end{aligned}$$
Taking the negative log:
$$\begin{aligned}
l(w,x)&=-\log P(y|x,w)=-\alpha(y)\log P(y=0|x,w)-(1-\alpha(y))\log P(y|x,w)\\
&=\log\left(1+\sum_{i=1}^{K-1}\exp(xw_i)\right)-(1-\alpha(y))\,xw_{y-1}\\
&=\log\left(1+\sum_{i=1}^{K-1}\exp(margins_i)\right)-(1-\alpha(y))\,margins_{y-1}
\end{aligned}$$
where:
$$\alpha(i)=\begin{cases}1,&i=0\\0,&i\neq0\end{cases},\qquad margins_i=xw_i$$
(the same indicator α as in our derivation above, so that the margin term vanishes when y = 0).
Taking the partial derivative:
$$\frac{\partial l(w,x)}{\partial w_{ij}}=\left(\frac{\exp(xw_i)}{1+\sum_{k=1}^{K-1}\exp(xw_k)}-(1-\alpha(y))\delta_{y,i+1}\right)x_j=multiplier_i\cdot x_j$$
where:
$$\delta_{i,j}=\begin{cases}1,&i=j\\0,&i\neq j\end{cases},\qquad multiplier_i=\frac{\exp(margins_i)}{1+\sum_{k=1}^{K-1}\exp(margins_k)}-(1-\alpha(y))\delta_{y,i+1}$$
To keep the exponentials from overflowing, maxMargin is subtracted inside each exp term, and l(w,x) is rewritten as:
$$\begin{aligned}
l(w,x)&=\log\left(1+\sum_{i=1}^{K-1}\exp(margins_i)\right)-(1-\alpha(y))\,margins_{y-1}\\
&=\log\left(\exp(-maxMargin)+\sum_{i=1}^{K-1}\exp(margins_i-maxMargin)\right)+maxMargin-(1-\alpha(y))\,margins_{y-1}\\
&=\log(1+sum)+maxMargin-(1-\alpha(y))\,margins_{y-1}
\end{aligned}$$
where:
$$sum=\exp(-maxMargin)+\sum_{i=1}^{K-1}\exp(margins_i-maxMargin)-1$$
and multiplier can be written as:
$$multiplier_i=\frac{\exp(margins_i)}{1+\sum_{k=1}^{K-1}\exp(margins_k)}-(1-\alpha(y))\delta_{y,i+1}=\frac{\exp(margins_i-maxMargin)}{1+sum}-(1-\alpha(y))\delta_{y,i+1}$$
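The same log-sum-exp trick as a standalone Scala sketch (a toy rewrite of mine, not the Spark code; it computes the stabilized loss for a single multinomial instance):

```scala
// Numerically stable multinomial logistic loss for one instance.
// margins(i) = x * w_{i+1}; label is in 0 .. K-1, with 0 the reference class.
def stableLoss(margins: Array[Double], label: Int): Double = {
  val maxMargin = math.max(0.0, margins.max)
  // sum = exp(-maxMargin) + sum_i exp(margins_i - maxMargin) - 1
  val sum = math.exp(-maxMargin) + margins.map(m => math.exp(m - maxMargin)).sum - 1.0
  // (1 - alpha(y)) * margins_{y-1}: zero when the label is the reference class
  val marginY = if (label > 0) margins(label - 1) else 0.0
  // l = log(1 + sum) + maxMargin - marginY
  math.log1p(sum) + maxMargin - marginY
}
```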
Spark source code
First, the example code:
```scala
import org.apache.spark.SparkContext
import org.apache.spark.mllib.classification.{LogisticRegressionWithLBFGS, LogisticRegressionModel}
import org.apache.spark.mllib.evaluation.MulticlassMetrics
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.util.MLUtils

// Load training data in LIBSVM format.
// Sample data format:
// 1 featureId1:value1 featureId2:value2 ...
// 0 featureId1:value3 featureId4:value4 ...
// Features and their values are all encoded as numbers
val data = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_libsvm_data.txt")

// Split data into training (60%) and test (40%).
val splits = data.randomSplit(Array(0.6, 0.4), seed = 11L)
val training = splits(0).cache()
val test = splits(1)

// Run training algorithm to build the model.
// The official example sets the number of classes to 10, but the labels in the
// sample data are only 0 and 1, so this looks like a mistake.
// Gradient descent traverses the whole sample set on every iteration, so the
// quasi-Newton method L-BFGS is preferable; it is covered in a later article.
val model = new LogisticRegressionWithLBFGS()
  .setNumClasses(10)
  .run(training)

// Compute raw scores on the test set.
val predictionAndLabels = test.map { case LabeledPoint(label, features) =>
  val prediction = model.predict(features)
  (prediction, label)
}

// Get evaluation metrics.
val metrics = new MulticlassMetrics(predictionAndLabels)
val precision = metrics.precision
println("Precision = " + precision)

// Save and load model.
// The output is a model, i.e. a vector θ; plugging it into the probability
// distribution function yields the probability of each class.
model.save(sc, "myModelPath")
val sameModel = LogisticRegressionModel.load(sc, "myModelPath")
```
The entry point for stochastic gradient descent:
```scala
/**
 * Train a logistic regression model given an RDD of (label, features) pairs. We run a fixed
 * number of iterations of gradient descent using the specified step size. Each iteration uses
 * `miniBatchFraction` fraction of the data to calculate the gradient. The weights used in
 * gradient descent are initialized using the initial weights provided.
 * NOTE: Labels used in Logistic Regression should be {0, 1}
 *
 * @param input RDD of (label, array of features) pairs.
 * @param numIterations Number of iterations of gradient descent to run.
 * @param stepSize Step size to be used for each iteration of gradient descent.
 * @param miniBatchFraction Fraction of data to be used per iteration.
 * @param initialWeights Initial set of weights to be used. Array should be equal in size to
 *        the number of features in the data.
 */
@Since("1.0.0")
def train(
    input: RDD[LabeledPoint],
    numIterations: Int,
    stepSize: Double,
    miniBatchFraction: Double,
    initialWeights: Vector): LogisticRegressionModel = {
  new LogisticRegressionWithSGD(stepSize, numIterations, 0.0, miniBatchFraction)
    .run(input, initialWeights)
}
```
LogisticRegressionWithLBFGS and LogisticRegressionWithSGD both extend GeneralizedLinearAlgorithm, whose run method is:
```scala
def run(input: RDD[LabeledPoint], initialWeights: Vector): M = {
  if (numFeatures < 0) {
    // The number of features is taken from the first row of the input.
    numFeatures = input.map(_.features.size).first()
  }

  // Check the storage level of the input data.
  if (input.getStorageLevel == StorageLevel.NONE) {
    logWarning("The input data is not directly cached, which may hurt performance if its"
      + " parent RDDs are also uncached.")
  }

  // Check the data properties before running the optimizer
  if (validateData && !validators.forall(func => func(input))) {
    throw new SparkException("Input validation failed.")
  }

  /**
   * Scaling columns to unit variance as a heuristic to reduce the condition number:
   *
   * During the optimization process, the convergence (rate) depends on the condition number of
   * the training dataset. Scaling the variables often reduces this condition number
   * heuristically, thus improving the convergence rate. Without reducing the condition number,
   * some training datasets mixing the columns with different scales may not be able to converge.
   *
   * GLMNET and LIBSVM packages perform the scaling to reduce the condition number, and return
   * the weights in the original scale.
   * See page 9 in http://cran.r-project.org/web/packages/glmnet/glmnet.pdf
   *
   * Here, if useFeatureScaling is enabled, we will standardize the training features by dividing
   * the variance of each column (without subtracting the mean), and train the model in the
   * scaled space. Then we transform the coefficients from the scaled space to the original scale
   * as GLMNET and LIBSVM do.
   *
   * That is, the data is standardized by dividing each column by its standard deviation.
   * Currently, it's only enabled in LogisticRegressionWithLBFGS.
   */
  val scaler = if (useFeatureScaling) {
    new StandardScaler(withStd = true, withMean = false).fit(input.map(_.features))
  } else {
    null
  }

  // Prepend an extra variable consisting of all 1.0's for the intercept.
  // TODO: Apply feature scaling to the weight vector instead of input data.
  // By default no intercept is added.
  val data =
    if (addIntercept) {
      if (useFeatureScaling) {
        input.map(lp => (lp.label, appendBias(scaler.transform(lp.features)))).cache()
      } else {
        input.map(lp => (lp.label, appendBias(lp.features))).cache()
      }
    } else {
      if (useFeatureScaling) {
        input.map(lp => (lp.label, scaler.transform(lp.features))).cache()
      } else {
        input.map(lp => (lp.label, lp.features))
      }
    }

  /**
   * TODO: For better convergence, in logistic regression, the intercepts should be computed
   * from the prior probability distribution of the outcomes; for linear regression,
   * the intercept should be set as the average of response.
   */
  val initialWeightsWithIntercept = if (addIntercept && numOfLinearPredictor == 1) {
    appendBias(initialWeights)
  } else {
    /** If `numOfLinearPredictor > 1`, initialWeights already contains intercepts. */
    initialWeights
  }

  // Run the optimizer: SGD or LBFGS.
  val weightsWithIntercept = optimizer.optimize(data, initialWeightsWithIntercept)
  ...
  createModel(weights, intercept)
}
```
The mini-batch SGD implementation (GradientDescent.runMiniBatchSGD):
```scala
def runMiniBatchSGD(
    data: RDD[(Double, Vector)],
    gradient: Gradient,
    updater: Updater,
    stepSize: Double,
    numIterations: Int,
    regParam: Double,
    miniBatchFraction: Double,
    initialWeights: Vector,
    convergenceTol: Double): (Vector, Array[Double]) = {
  ...
  // Records the stochastic loss of every iteration; returned to the caller at the end.
  val stochasticLossHistory = new ArrayBuffer[Double](numIterations)
  ...
  // Initialize weights as a column vector
  var weights = Vectors.dense(initialWeights.toArray)
  val n = weights.size

  /**
   * For the first iteration, the regVal will be initialized as sum of weight squares
   * if it's L2 updater; for L1 updater, the same logic is followed.
   */
  var regVal = updater.compute(
    weights, Vectors.zeros(weights.size), 0, 1, regParam)._2

  var converged = false // indicates whether converged based on convergenceTol
  var i = 1
  while (!converged && i <= numIterations) {
    val bcWeights = data.context.broadcast(weights)
    // Sample a subset (fraction miniBatchFraction) of the total data
    // compute and sum up the subgradients on this subset (this is one map-reduce)
    val (gradientSum, lossSum, miniBatchSize) = data.sample(false, miniBatchFraction, 42 + i)
      .treeAggregate((BDV.zeros[Double](n), 0.0, 0L))(
        seqOp = (c, v) => {
          // c: (grad, loss, count), v: (label, features)
          // gradient.compute accumulates this sample's gradient into c._1 (the main
          // purpose) and returns its loss, the negative log-likelihood: for binary LR
          // this is log(1 + exp(margin)) when the label is positive, and
          // log(1 + exp(margin)) - margin when it is 0 (see the case 2 block below).
          val l = gradient.compute(v._2, v._1, bcWeights.value, Vectors.fromBreeze(c._1))
          (c._1, c._2 + l, c._3 + 1)
        },
        combOp = (c1, c2) => {
          // c: (grad, loss, count)
          (c1._1 += c2._1, c1._2 + c2._2, c1._3 + c2._3)
        })

    if (miniBatchSize > 0) {
      /**
       * lossSum is computed using the weights from the previous iteration
       * and regVal is the regularization value computed in the previous iteration as well.
       */
      stochasticLossHistory.append(lossSum / miniBatchSize + regVal)
      // Apply the update, including the regularization term.
      val update = updater.compute(
        weights, Vectors.fromBreeze(gradientSum / miniBatchSize.toDouble),
        stepSize, i, regParam)
      weights = update._1
      regVal = update._2

      previousWeights = currentWeights
      currentWeights = Some(weights)
      if (previousWeights != None && currentWeights != None) {
        converged = isConverged(previousWeights.get,
          currentWeights.get, convergenceTol)
      }
    } else {
      logWarning(s"Iteration ($i/$numIterations). The size of sampled batch is zero")
    }
    i += 1
  }
  logInfo("GradientDescent.runMiniBatchSGD finished. Last 10 stochastic losses %s".format(
    stochasticLossHistory.takeRight(10).mkString(", ")))

  (weights, stochasticLossHistory.toArray)
}
```
combOp performs the ∑ part of the θ update. In the binary logistic regression case, the per-sample gradient is computed as follows:
```scala
case 2 =>
  /**
   * For Binary Logistic Regression.
   *
   * Although the loss and gradient calculation for multinomial one is more generalized,
   * and multinomial one can also be used in binary case, we still implement a specialized
   * binary version for performance reason.
   */
  val margin = -1.0 * dot(data, weights)
  val multiplier = (1.0 / (1.0 + math.exp(margin))) - label
  axpy(multiplier, data, cumGradient)
  if (label > 0) {
    // The following is equivalent to log(1 + exp(margin)) but more numerically stable.
    MLUtils.log1pExp(margin)
  } else {
    MLUtils.log1pExp(margin) - margin
  }
```
Here margin is $-\theta^Tx$, multiplier is $h_\theta(x_i)-y_i$, and the axpy call accumulates $(h_\theta(x_i)-y_i)x_i$.
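For reference, the same computation as a standalone sketch (a toy rewrite of the logic above, not the Spark code; `cumGradient` is accumulated in place just like the axpy call):

```scala
// Accumulates this sample's gradient (h(x) - y) * x into cumGradient and
// returns its loss, mirroring the case 2 branch above.
def binaryGradient(x: Array[Double], label: Double,
                   weights: Array[Double], cumGradient: Array[Double]): Double = {
  val margin = -1.0 * x.zip(weights).map { case (xi, wi) => xi * wi }.sum
  val multiplier = 1.0 / (1.0 + math.exp(margin)) - label   // h(x) - y
  for (j <- x.indices) cumGradient(j) += multiplier * x(j)  // the axpy step
  // log(1 + exp(m)) computed stably, as MLUtils.log1pExp does
  def log1pExp(m: Double): Double =
    if (m > 0) m + math.log1p(math.exp(-m)) else math.log1p(math.exp(m))
  if (label > 0) log1pExp(margin) else log1pExp(margin) - margin
}
```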
References
- http://blog.csdn.net/pakko/article/details/37878837
- http://spark.apache.org/docs/latest/mllib-linear-methods.html
- http://www.slideshare.net/dbtsai/2014-0620-mlor-36132297
- https://www.codecogs.com/latex/eqneditor.php?lang=zh-cn (LaTeX equation editor)