Deep learning by Andrew Ng --- Linear Decoder

Sparse Autoencoder Recap:

Because we used a sigmoid activation function for f(z^(3)), we needed to constrain or scale the inputs to be in the range [0,1], since the sigmoid function outputs numbers in the range [0,1]. (This is why the sparse autoencoder's inputs were preprocessed to zero mean and then rescaled into that range.)

Linear Decoder:

It is only in the output layer that we use a linear activation function; the hidden layers still use the sigmoid.

Because the output is no longer restricted to [0,1], the input no longer needs to be scaled into [0,1] either. Concretely, the output layer's z is no longer passed through the sigmoid (and the sigmoid derivative no longer appears in its backpropagation term); instead a^(3) = z^(3) is used directly, while the hidden layers are unchanged.

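As a minimal sketch (reusing the variable names W2, b2, a2 and data from the exercise code below), these are the only lines of the cost function that change relative to the ordinary sparse autoencoder: the forward pass outputs z3 directly, and the output-layer error term drops the sigmoid-derivative factor.

z3 = W2*a2 + b2;      % output-layer pre-activation
a3 = z3;              % linear decoder: identity activation instead of sigmoid(z3)
d3 = -(data - a3);    % output error term; with a sigmoid decoder this would be
                      %   -(data - a3) .* a3 .* (1 - a3)
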
Exercise solution:

  • linearDecoderExercise.m
%% CS294A/CS294W Linear Decoder Exercise

%  Instructions
%  ------------
%
%  This file contains code that helps you get started on the
%  linear decoder exericse. For this exercise, you will only need to modify
%  the code in sparseAutoencoderLinearCost.m. You will not need to modify
%  any code in this file.

%%======================================================================
%% STEP 0: Initialization
%  Here we initialize some parameters used for the exercise.

imageChannels = 3;     % number of channels (rgb, so 3)

patchDim   = 8;          % patch dimension
numPatches = 100000;   % number of patches

visibleSize = patchDim * patchDim * imageChannels;  % number of input units
outputSize  = visibleSize;   % number of output units
hiddenSize  = 400;           % number of hidden units 

sparsityParam = 0.035; % desired average activation of the hidden units.
lambda = 3e-3;         % weight decay parameter
beta = 5;              % weight of sparsity penalty term       

epsilon = 0.1;         % epsilon for ZCA whitening

%%======================================================================
%% STEP 1: Create and modify sparseAutoencoderLinearCost.m to use a linear decoder,
%          and check gradients
%  You should copy sparseAutoencoderCost.m from your earlier exercise
%  and rename it to sparseAutoencoderLinearCost.m.
%  Then you need to rename the function from sparseAutoencoderCost to
%  sparseAutoencoderLinearCost, and modify it so that the sparse autoencoder
%  uses a linear decoder instead. Once that is done, you should check
% your gradients to verify that they are correct.

% NOTE: Modify sparseAutoencoderCost first!

% To speed up gradient checking, we will use a reduced network and some
% dummy patches

debugHiddenSize = 5;
debugvisibleSize = 8;
patches = rand([8 10]);
theta = initializeParameters(debugHiddenSize, debugvisibleSize); 

[cost, grad] = sparseAutoencoderLinearCost(theta, debugvisibleSize, debugHiddenSize, ...
                                           lambda, sparsityParam, beta, ...
                                           patches);

% Check gradients
numGrad = computeNumericalGradient( @(x) sparseAutoencoderLinearCost(x, debugvisibleSize, debugHiddenSize, ...
                                                  lambda, sparsityParam, beta, ...
                                                  patches), theta);

% Use this to visually compare the gradients side by side
disp([numGrad grad]); 

diff = norm(numGrad-grad)/norm(numGrad+grad);
% Should be small. In our implementation, these values are usually less than 1e-9.
disp(diff); 

assert(diff < 1e-9, 'Difference too large. Check your gradient computation again');

% NOTE: Once your gradients check out, you should run step 0 again to
%       reinitialize the parameters

%%======================================================================
%% STEP 2: Learn features on small patches
%  In this step, you will use your sparse autoencoder (which now uses a
%  linear decoder) to learn features on small patches sampled from related
%  images.

%% STEP 2a: Load patches
%  In this step, we load 100k patches sampled from the STL10 dataset and
%  visualize them. Note that these patches have been scaled to [0,1]

load stlSampledPatches.mat

displayColorNetwork(patches(:, 1:100));

%% STEP 2b: Apply preprocessing
%  In this sub-step, we preprocess the sampled patches, in particular,
%  ZCA whitening them.
%
%  In a later exercise on convolution and pooling, you will need to replicate
%  exactly the preprocessing steps you apply to these patches before
%  using the autoencoder to learn features on them. Hence, we will save the
%  ZCA whitening and mean image matrices together with the learned features
%  later on.

% Subtract mean patch (hence zeroing the mean of the patches)
meanPatch = mean(patches, 2);
patches = bsxfun(@minus, patches, meanPatch);

% Apply ZCA whitening
sigma = patches * patches' / numPatches;
[u, s, v] = svd(sigma);
ZCAWhite = u * diag(1 ./ sqrt(diag(s) + epsilon)) * u';
patches = ZCAWhite * patches;

displayColorNetwork(patches(:, 1:100));

%% STEP 2c: Learn features
%  You will now use your sparse autoencoder (with linear decoder) to learn
%  features on the preprocessed patches. This should take around 45 minutes.

theta = initializeParameters(hiddenSize, visibleSize);

% Use minFunc to minimize the function
addpath minFunc/

options = struct;
options.Method = 'lbfgs';
options.maxIter = 400;
options.display = 'on';

[optTheta, cost] = minFunc( @(p) sparseAutoencoderLinearCost(p, ...
                                   visibleSize, hiddenSize, ...
                                   lambda, sparsityParam, ...
                                   beta, patches), ...
                              theta, options);

% Save the learned features and the preprocessing matrices for use in
% the later exercise on convolution and pooling
fprintf('Saving learned features and preprocessing matrices...\n');
save('STL10Features.mat', 'optTheta', 'ZCAWhite', 'meanPatch');
fprintf('Saved\n');

%% STEP 2d: Visualize learned features

W = reshape(optTheta(1:visibleSize * hiddenSize), hiddenSize, visibleSize);
b = optTheta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
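% W learns features in the whitened space, so multiplying by ZCAWhite maps them
% back to the original (unwhitened) RGB pixel space before visualization.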
displayColorNetwork( (W*ZCAWhite)');
  • sparseAutoencoderLinearCost.m

% -------------------- YOUR CODE HERE --------------------
% Instructions:
%   Copy sparseAutoencoderCost in sparseAutoencoderCost.m from your
%   earlier exercise onto this file, renaming the function to
%   sparseAutoencoderLinearCost, and changing the autoencoder to use a
%   linear decoder.
function [cost,grad] = sparseAutoencoderLinearCost(theta, visibleSize, hiddenSize, ...
                                             lambda, sparsityParam, beta, data)

% visibleSize: the number of input units (probably 64)
% hiddenSize: the number of hidden units (probably 25)
% lambda: weight decay parameter
% sparsityParam: The desired average activation for the hidden units (denoted in the lecture
%                           notes by the greek alphabet rho, which looks like a lower-case "p").
% beta: weight of sparsity penalty term
% data: Our 64x10000 matrix containing the training data.  So, data(:,i) is the i-th training example. 

% The input theta is a vector (because minFunc expects the parameters to be a vector).
% We first convert theta to the (W1, W2, b1, b2) matrix/vector format, so that this
% follows the notation convention of the lecture notes. 

W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
W2 = reshape(theta(hiddenSize*visibleSize+1:2*hiddenSize*visibleSize), visibleSize, hiddenSize);
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
b2 = theta(2*hiddenSize*visibleSize+hiddenSize+1:end);

% Cost and gradient variables (your code needs to compute these values).
% Here, we initialize them to zeros.
cost = 0;
W1grad = zeros(size(W1));
W2grad = zeros(size(W2));
b1grad = zeros(size(b1));
b2grad = zeros(size(b2));

%% ---------- YOUR CODE HERE --------------------------------------
%  Instructions: Compute the cost/optimization objective J_sparse(W,b) for the Sparse Autoencoder,
%                and the corresponding gradients W1grad, W2grad, b1grad, b2grad.
%
% W1grad, W2grad, b1grad and b2grad should be computed using backpropagation.
% Note that W1grad has the same dimensions as W1, b1grad has the same dimensions
% as b1, etc.  Your code should set W1grad to be the partial derivative of J_sparse(W,b) with
% respect to W1.  I.e., W1grad(i,j) should be the partial derivative of J_sparse(W,b)
% with respect to the input parameter W1(i,j).  Thus, W1grad should be equal to the term
% [(1/m) \Delta W^{(1)} + \lambda W^{(1)}] in the last block of pseudo-code in Section 2.2
% of the lecture notes (and similarly for W2grad, b1grad, b2grad).
%
% Stated differently, if we were using batch gradient descent to optimize the parameters,
% the gradient descent update to W1 would be W1 := W1 - alpha * W1grad, and similarly for W2, b1, b2.
%
% (Earlier per-example, sigmoid-decoder attempt, kept commented out for reference.)
% H = zeros(size(data));
% m = size(data, 2);
% sparsity_vec = zeros(hiddenSize, 1);
% for index = 1:(m/1000)
%     z2 = W1*data(:,index) + b1;
%     a2 = sigmoid(z2);
%
%     for q = 1:hiddenSize
%         sparsity_vec(q) = (1/m) * sum(sum(a2(q).*data));
%     end
%     sparsity_delta = beta * (-(sparsityParam./sparsity_vec) + ((1-sparsityParam)./(1-sparsity_vec)));
%
%     z3 = W2*a2 + b2;
%     a3 = sigmoid(z3);
%     H(:,index) = a3;
%     delta3 = -(data(:,index) - a3) .* (a3.*(1-a3));
%     delta2 = (W2'*delta3 + sparsity_delta) .* (a2.*(1-a2));
%     % If you want to use gradient checking, make sure that delta3*a2' = g(theta).
%     W2grad = W2grad + delta3 * a2';
%     b2grad = b2grad + delta3;
%     W1grad = W1grad + delta2 * data(:,index)';
%     b1grad = b1grad + delta2;
% end
% alpha = 10;
% W1 = W1 - alpha * (((1/m)*W1grad) + lambda*W1);
% b1 = b1 - alpha * ((1/m)*b1grad);
% W2 = W2 - alpha * (((1/m)*W2grad) + lambda*W2);
% b2 = b2 - alpha * ((1/m)*b2grad);
% J = (1/(2*m)) * sum(sum((H-data).^2)) + (lambda/2) * (sum(sum(W1.^2)) + sum(sum(W2.^2)));
%
% sparsity1 = 0;
% for j = 1:hiddenSize
%     mid = (1/m) * sum(sum(a2(j)*data));
%     sparsity1 = sparsityParam*log(sparsityParam/mid) + (1-sparsityParam)*log((1-sparsityParam)/(1-mid));
%     cost = J + beta*sparsity1;
% end
Jcost = 0;    % reconstruction error term
Jweight = 0;  % weight-decay penalty
Jsparse = 0;  % sparsity penalty
[n, m] = size(data);  % m is the number of examples, n the number of features per example

% Forward pass: compute the pre-activations and activations of each layer
z2 = W1*data + repmat(b1, 1, m);  % note that b1 must be replicated into an m-column matrix
a2 = sigmoid(z2);
z3 = W2*a2 + repmat(b2, 1, m);
a3 = z3;                          % linear decoder: the output activation is z3 itself

% Reconstruction error
Jcost = (0.5/m)*sum(sum((a3-data).^2));

% Weight-decay penalty
Jweight = (1/2)*(sum(sum(W1.^2)) + sum(sum(W2.^2)));

% Sparsity penalty (KL divergence between sparsityParam and the average activations)
rho = (1/m).*sum(a2, 2);  % average activation vector of the hidden layer
Jsparse = sum(sparsityParam.*log(sparsityParam./rho) + ...
        (1-sparsityParam).*log((1-sparsityParam)./(1-rho)));

% Total cost
cost = Jcost + lambda*Jweight + beta*Jsparse;

% Backward pass: compute the error term of each layer
d3 = -(data - a3);  % linear output layer, so no f'(z3) factor here
sterm = beta*(-sparsityParam./rho + (1-sparsityParam)./(1-rho));  % extra term introduced by
                                                                  % the sparsity penalty
d2 = (W2'*d3 + repmat(sterm, 1, m)).*sigmoidInv(z2);

% W1grad
W1grad = W1grad + d2*data';
W1grad = (1/m)*W1grad + lambda*W1;

% W2grad
W2grad = W2grad + d3*a2';
W2grad = (1/m).*W2grad + lambda*W2;

% b1grad
b1grad = b1grad + sum(d2, 2);  % the gradient of b is a vector, so sum each row across examples
b1grad = (1/m)*b1grad;

% b2grad
b2grad = b2grad + sum(d3, 2);
b2grad = (1/m)*b2grad;

%-------------------------------------------------------------------
% After computing the cost and gradient, we will convert the gradients back
% to a vector format (suitable for minFunc).  Specifically, we will unroll
% your gradient matrices into a vector.

grad = [W1grad(:) ; W2grad(:) ; b1grad(:) ; b2grad(:)];

end

%-------------------------------------------------------------------
% Here's an implementation of the sigmoid function, which you may find useful
% in your computation of the costs and the gradients.  This inputs a (row or
% column) vector (say (z1, z2, z3)) and returns (f(z1), f(z2), f(z3)). 

function sigm = sigmoid(x)

    sigm = 1 ./ (1 + exp(-x));
end
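% Derivative of the sigmoid, f'(z) = f(z).*(1 - f(z)), used in backpropagation.
% (Despite its name, sigmoidInv computes the derivative, not the inverse, of the sigmoid.)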
function sigmInv = sigmoidInv(x)
    sigmInv = sigmoid(x).*(1-sigmoid(x));
end

% -------------------- YOUR CODE HERE --------------------                                    
