PCA and Whitening on natural images

Step 0: Prepare data
Step 0a: Load data

The starter code contains code to load a set of natural images and sample 12x12 patches from them. The raw patches will look something like this:

These patches are stored as column vectors in the matrix x.

Step 0b: Zero mean the data

First, for each image patch, compute the mean pixel value and subtract it from that patch, thus centering each patch around zero. Note that you should compute a separate mean for each image patch, not a single mean over the whole dataset.
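
As a quick sketch (assuming, as in the solution code below, that x is a 144 x 10000 matrix with one patch per column), the per-patch means can be removed in one vectorized step:

% Subtract each patch's own mean pixel value (one mean per column of x).
avg = mean(x, 1);                     % 1 x 10000 vector of per-patch means
x = x - repmat(avg, size(x, 1), 1);   % broadcast the subtraction over rows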

Step 1: Implement PCA
Step 1a: Implement PCA

In this step, you will implement PCA to obtain xRot, the matrix in which the data is "rotated" into the basis formed by the principal components.
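
In outline (a sketch consistent with the solution code further down), the principal directions are the eigenvectors of the data covariance matrix, obtained here via svd:

sigma = x * x' / size(x, 2);   % covariance of the zero-mean data
[u, s, v] = svd(sigma);        % columns of u are the principal directions
xRot = u' * x;                 % data expressed in the eigenbasis of sigma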

Step 1b: Check covariance

To verify that your implementation of PCA is correct, you should check the covariance matrix for the rotated data xrot. PCA guarantees that the covariance matrix for the rotated data is a diagonal matrix (a matrix with non-zero entries only along the main diagonal). Implement code to compute the covariance matrix and verify this property. One way to do this is to compute the covariance matrix, and visualise it using the MATLAB command imagesc.
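
For example, a minimal check along these lines:

covar = xRot * xRot' / size(xRot, 2);
imagesc(covar);   % should show a diagonal line against a blue background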

Step 2: Find number of components to retain

Next, choose k, the number of principal components to retain. Pick k to be as small as possible, but so that at least 99% of the variance is retained.
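
One compact way to pick k (a sketch assuming s is the diagonal matrix of eigenvalues from the svd above, sorted in decreasing order): the fraction of variance retained by the top k components is the sum of the first k eigenvalues divided by the total, so take the smallest k whose cumulative ratio reaches 0.99:

lambda = diag(s);                                    % eigenvalues, largest first
k = find(cumsum(lambda) / sum(lambda) >= 0.99, 1);   % smallest k retaining 99%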

Step 3: PCA with dimension reduction

Now that you have found k, compute x̃, the reduced-dimension representation of the data. This gives you a representation of each image patch as a k-dimensional vector instead of a 144-dimensional vector.
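
A sketch of both directions (xTilde is the k-dimensional representation; xHat reconstructs it in the original 144-dimensional pixel basis for the visual comparison below):

xTilde = u(:, 1:k)' * x;       % k x 10000 reduced representation
xHat   = u(:, 1:k) * xTilde;   % approximate reconstruction, 144 x 10000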

Raw images

PCA dimension-reduced images
(99% variance)


PCA dimension-reduced images
(90% variance)

Step 4: PCA with whitening and regularization
Step 4a: Implement PCA with whitening and regularization
Step 4b: Check covariance

Similar to PCA alone, PCA with whitening also results in processed data with a diagonal covariance matrix. Unlike PCA alone, however, whitening additionally ensures that the diagonal entries are all equal to 1, i.e. that the covariance matrix is the identity matrix. That would be the case if you were whitening with no regularization. Here, however, you are whitening with regularization, to avoid the numerical problems associated with small eigenvalues. As a result, some of the diagonal entries of the covariance of xPCAWhite will be smaller than 1.
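
Concretely, regularized PCA whitening rescales each rotated coordinate by 1/sqrt(lambda_i + epsilon), where lambda_i is the i-th eigenvalue (this mirrors the solution code below):

epsilon = 0.1;   % regularization; set near 0 to see the unregularized case
xPCAWhite = diag(1 ./ sqrt(diag(s) + epsilon)) * u' * x;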

Covariance for PCA whitening with regularization

Covariance for PCA whitening without regularization

Step 5: ZCA whitening

Now implement ZCA whitening to produce the matrix xZCAWhite. Visualize xZCAWhite and compare it to the raw data.
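
ZCA whitening is simply PCA whitening followed by a rotation back into the original pixel basis (again matching the code below):

xZCAWhite = u * xPCAWhite;   % whitened data expressed in pixel space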

Experiment procedure and results

Randomly sample 10000 patches and display 204 of them, as shown below:

Then zero-mean each patch, which gives:

Apply PCA to the sampled patches to obtain the rotated data; its covariance matrix is shown below:

Reconstructing the original data from the PCA representation that retains 99% of the variance:

The images after PCA whitening:

The covariance matrix of the patches at this point:

The result of ZCA whitening:

Code

%%================================================================
%% Step 0a: Load data
%  Here we provide the code to load natural image data into x.
%  x will be a 144 * 10000 matrix, where the kth column x(:, k) corresponds to
%  the raw image data from the kth 12x12 image patch sampled.
%  You do not need to change the code below.

x = sampleIMAGESRAW();
figure('name','Raw images');
randsel = randi(size(x,2),204,1); % A random selection of samples for visualization
display_network(x(:,randsel)); % why can x be displayed even though it contains negative values?

%%================================================================
%% Step 0b: Zero-mean the data (per patch)
%  You can make use of the mean and repmat/bsxfun functions.

% -------------------- YOUR CODE HERE --------------------
x = x-repmat(mean(x,1),size(x,1),1); % subtract each column's (i.e. each patch's) own mean
%x = x-repmat(mean(x,2),1,size(x,2)); % alternative: subtract the per-pixel mean across patches

%%================================================================
%% Step 1a: Implement PCA to obtain xRot
%  Implement PCA to obtain xRot, the matrix in which the data is expressed
%  with respect to the eigenbasis of sigma, which is the matrix U.

% -------------------- YOUR CODE HERE --------------------
xRot = zeros(size(x)); % You need to compute this
[n, m] = size(x);
sigma = (1.0/m)*x*x';
[u, s, v] = svd(sigma);
xRot = u'*x;

%%================================================================
%% Step 1b: Check your implementation of PCA
%  The covariance matrix for the data expressed with respect to the basis U
%  should be a diagonal matrix with non-zero entries only along the main
%  diagonal. We will verify this here.
%  Write code to compute the covariance matrix, covar.
%  When visualised as an image, you should see a straight line across the
%  diagonal (non-zero entries) against a blue background (zero entries).

% -------------------- YOUR CODE HERE --------------------
covar = zeros(size(x, 1)); % You need to compute this
covar = (1./m)*xRot*xRot';

% Visualise the covariance matrix. You should see a line across the
% diagonal against a blue background.
figure('name','Visualisation of covariance matrix');
imagesc(covar);

%%================================================================
%% Step 2: Find k, the number of components to retain
%  Write code to determine k, the number of components to retain in order
%  to retain at least 99% of the variance.

% -------------------- YOUR CODE HERE --------------------
k = 0; % Set k accordingly
ss = diag(s); % eigenvalues of sigma, in decreasing order
% cumsum(ss) is the running total of the eigenvalues, so cumsum(ss)/sum(ss)
% is the fraction of variance retained by the first 1, 2, ... components.
% Take the smallest k whose cumulative fraction reaches 99%.
k = find(cumsum(ss)/sum(ss) >= 0.99, 1);

%%================================================================
%% Step 3: Implement PCA with dimension reduction
%  Now that you have found k, you can reduce the dimension of the data by
%  discarding the remaining dimensions. In this way, you can represent the
%  data in k dimensions instead of the original 144, which will save you
%  computational time when running learning algorithms on the reduced
%  representation.
%
%  Following the dimension reduction, invert the PCA transformation to produce
%  the matrix xHat, the dimension-reduced data with respect to the original basis.
%  Visualise the data and compare it to the raw data. You will observe that
%  there is little loss due to throwing away the principal components that
%  correspond to dimensions with low variation.

% -------------------- YOUR CODE HERE --------------------
xHat = zeros(size(x));  % You need to compute this
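% Keep the top k rotated coordinates, zero out the remaining n-k, and rotate
% back: u(:,1:k)'*x is the k x m reduced representation, and padding it with
% zeros before left-multiplying by u reconstructs it in the original basis.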
xHat = u*[u(:,1:k)'*x; zeros(n-k,m)];

% Visualise the data, and compare it to the raw data
% You should observe that the raw and processed data are of comparable quality.
% For comparison, you may wish to generate a PCA reduced image which
% retains only 90% of the variance.

figure('name',['PCA processed images ',sprintf('(%d / %d dimensions)', k, size(x, 1))]);
display_network(xHat(:,randsel));
figure('name','Raw images');
display_network(x(:,randsel));

%%================================================================
%% Step 4a: Implement PCA with whitening and regularisation
%  Implement PCA with whitening and regularisation to produce the matrix
%  xPCAWhite. 

epsilon = 0.1;
xPCAWhite = zeros(size(x));

% -------------------- YOUR CODE HERE --------------------
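% Rescale each rotated coordinate by 1/sqrt(lambda + epsilon); the epsilon
% term keeps coordinates with very small eigenvalues from being blown up.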
xPCAWhite = diag(1./sqrt(diag(s)+epsilon))*u'*x;
figure('name','PCA whitened images');
display_network(xPCAWhite(:,randsel));

%%================================================================
%% Step 4b: Check your implementation of PCA whitening
%  Check your implementation of PCA whitening with and without regularisation.
%  PCA whitening without regularisation results a covariance matrix
%  that is equal to the identity matrix. PCA whitening with regularisation
%  results in a covariance matrix with diagonal entries starting close to
%  1 and gradually becoming smaller. We will verify these properties here.
%  Write code to compute the covariance matrix, covar.
%
%  Without regularisation (set epsilon to 0 or close to 0),
%  when visualised as an image, you should see a red line across the
%  diagonal (entries equal to one) against a blue background (zero entries).
%  With regularisation, you should see a red line that slowly turns
%  blue across the diagonal, corresponding to the one entries slowly
%  becoming smaller.

% -------------------- YOUR CODE HERE --------------------
covar = (1./m)*xPCAWhite*xPCAWhite';

% Visualise the covariance matrix. You should see a red line across the
% diagonal against a blue background.
figure('name','Visualisation of covariance matrix');
imagesc(covar);

%%================================================================
%% Step 5: Implement ZCA whitening
%  Now implement ZCA whitening to produce the matrix xZCAWhite.
%  Visualise the data and compare it to the raw data. You should observe
%  that whitening results in, among other things, enhanced edges.

xZCAWhite = zeros(size(x));

% -------------------- YOUR CODE HERE --------------------
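% ZCA whitening is PCA whitening rotated back into the original pixel basis,
% so the result still looks like image patches, with edges enhanced.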
xZCAWhite = u*xPCAWhite;

% Visualise the data, and compare it to the raw data.
% You should observe that the whitened images have enhanced edges.
figure('name','ZCA whitened images');
display_network(xZCAWhite(:,randsel));
figure('name','Raw images');
display_network(x(:,randsel));