UFLDL Study Notes and Programming Assignment: Feature Extraction Using Convolution, Pooling

UFLDL has released a new tutorial. It feels better than the old one: it starts from the basics, is systematic and clear, and comes with programming exercises.

In a deep learning discussion group, some experienced members said there is no need to dig deeply into other machine learning algorithms first; you can go straight to DL.

So I have recently started working through it. The tutorial plus MATLAB programming makes a great combination.

The new tutorial is at: http://ufldl.stanford.edu/tutorial/

Links for this section:

http://ufldl.stanford.edu/tutorial/supervised/FeatureExtractionUsingConvolution/

http://ufldl.stanford.edu/tutorial/supervised/Pooling/

http://ufldl.stanford.edu/tutorial/supervised/ExerciseConvolutionAndPooling/

Convolution: this uses MATLAB's conv2 function, which is a bit awkward here. conv2 computes convolution in the mathematical sense, so internally it flips the filter by 180 degrees.

What we actually want is not the mathematical convolution but a simple "inner product" over each window: element-wise multiplication followed by a sum. So we flip the filter ourselves first and then pass it to conv2; the two flips cancel out and we get the result we want.

In fact, I think whether you flip or not would not really change the final result, because W is learned anyway: an unflipped "convolution" would simply end up learning the filter in flipped form.
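
A minimal standalone sketch (toy numbers, not part of the exercise code) showing that flipping the filter before conv2 gives exactly the window-wise "inner product" we want:

% Check that conv2 with a pre-flipped filter equals plain cross-correlation
im     = magic(5);            % toy 5x5 "image"
filter = [1 2; 3 4];          % toy 2x2 filter

% What we actually want: element-wise products summed over each window
wanted = zeros(4, 4);
for r = 1:4
  for c = 1:4
    patch = im(r:r+1, c:c+1);
    wanted(r, c) = sum(sum(patch .* filter));
  end
end

% conv2 flips the filter internally, so flipping it first cancels that out
got = conv2(im, rot90(filter, 2), 'valid');

disp(max(abs(wanted(:) - got(:))));   % prints 0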

Pooling: the pooling stride here equals poolDim, so the pooling regions do not overlap. conv2 with an all-ones filter is used to compute the window sums, which is a nice performance trick; subsampling the result and dividing by poolDim^2 gives the means. Remember: no activation function is needed here! A tiny worked example follows.
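
A worked example of this trick with my own toy values, assuming a 4x4 feature map and poolDim = 2:

% Mean pooling via conv2 with an all-ones filter, then subsampling
feat    = reshape(1:16, 4, 4);   % toy 4x4 convolved feature map
poolDim = 2;

sums   = conv2(feat, ones(poolDim), 'valid');              % window sums at every offset
pooled = sums(1:poolDim:end, 1:poolDim:end) ./ poolDim^2;  % keep non-overlapping blocks and average

disp(pooled);   % each entry is the mean of the corresponding 2x2 block of feat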

This exercise is fairly simple.

Still, a few MATLAB functions are worth a quick summary (see the short demo after this list):

conv2 - 2-D convolution

squeeze - removes singleton dimensions (dimensions of size 1)

rot90 - rotates a matrix by 90 degrees (rot90(A, 2) rotates by 180)

reshape - reshapes an array to new dimensions
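
A quick demo of these four calls with toy values:

A = reshape(1:8, 2, 2, 2);            % 2x2x2 array

conv2([1 2; 3 4], ones(2), 'valid')   % 2-D convolution -> 10
squeeze(A(1, :, :))                   % drop the singleton first dimension -> 2x2 matrix
rot90([1 2; 3 4], 2)                  % rotate by 180 degrees -> [4 3; 2 1]
reshape(1:6, 2, 3)                    % rearrange into a 2x3 matrix (column-major order)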

Running results:

The exercise mainly checks whether the two functions you wrote are correct.

The main code is below:

cnnConvolve.m

function convolvedFeatures = cnnConvolve(filterDim, numFilters, images, W, b)
%cnnConvolve Returns the convolution of the features given by W and b with
%the given images
%
% Parameters:
%  filterDim - filter (feature) dimension
%  numFilters - number of feature maps
%  images - large images to convolve with, matrix in the form
%           images(r, c, image number)  % note the order of the dimensions
%  W, b - W, b for features from the sparse autoencoder
%         W is of shape (filterDim,filterDim,numFilters)
%         b is of shape (numFilters,1)
%
% Returns:
%  convolvedFeatures - matrix of convolved features in the form
%                      convolvedFeatures(imageRow, imageCol, featureNum, imageNum) % note the order of the dimensions

numImages = size(images, 3);
imageDim = size(images, 1); % number of rows, i.e. the height; the width is not computed, so images are assumed square
convDim = imageDim - filterDim + 1; % height (and width) of each feature map after convolution

convolvedFeatures = zeros(convDim, convDim, numFilters, numImages);

% Instructions:
%   Convolve every filter with every image here to produce the
%   (imageDim - filterDim + 1) x (imageDim - filterDim + 1) x numFeatures x numImages
%   matrix convolvedFeatures, such that
%   convolvedFeatures(imageRow, imageCol, featureNum, imageNum) is the
%   value of the convolved featureNum feature for the imageNum image over
%   the region (imageRow, imageCol) to (imageRow + filterDim - 1, imageCol + filterDim - 1)
%
% Expected running times:
%   Convolving with 100 images should take less than 30 seconds
%   Convolving with 5000 images should take around 2 minutes
%   (So to save time when testing, you should convolve with less images, as
%   described earlier)

for imageNum = 1:numImages
  for filterNum = 1:numFilters

    % convolution of image with feature matrix
    convolvedImage = zeros(convDim, convDim);

    % Obtain the feature (filterDim x filterDim) needed during the convolution

    %%% YOUR CODE HERE %%%
    filter = W(:,:,filterNum);

    % Flip the feature matrix because of the definition of convolution, as explained later
    filter = rot90(squeeze(filter),2); % squeeze removes singleton dimensions; here it drops the (singleton) third dimension, turning the slice into a 2-D matrix

    % Obtain the image
    im = squeeze(images(:, :, imageNum));

    % Convolve "filter" with "im", adding the result to convolvedImage
    % be sure to do a 'valid' convolution

    %%% YOUR CODE HERE %%%
    convolvedImage = conv2(im, filter, 'valid'); % with the 'valid' option, the cropping code below is no longer needed
    %conv2Dim = size(convolvedImage,1);
    %im_start = (conv2Dim - convDim+2)/2;
    %im_end = im_start+convDim-1;
    %convolvedImage = convolvedImage(im_start:im_end,im_start:im_end); % take the central part

    % Add the bias unit
    % Then, apply the sigmoid function to get the hidden activation

    %%% YOUR CODE HERE %%%
    convolvedImage = convolvedImage + b(filterNum);
    convolvedImage = sigmoid(convolvedImage); % sigmoid(x) = 1 ./ (1 + exp(-x)), assumed to be on the path
    convolvedImage = reshape(convolvedImage, convDim, convDim, 1, 1); % reshape from 2-D to 4-D

    convolvedFeatures(:, :, filterNum, imageNum) = convolvedImage;
  end
end

end
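
A hypothetical call, just to show the shapes involved (the sizes below are made up, not the exercise's actual values):

images    = rand(28, 28, 10);   % 10 images of size 28x28
filterDim = 8;  numFilters = 4;
W = rand(filterDim, filterDim, numFilters);
b = rand(numFilters, 1);

convolved = cnnConvolve(filterDim, numFilters, images, W, b);
size(convolved)   % -> 21 21 4 10, i.e. (28-8+1) x (28-8+1) x numFilters x numImages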

cnnPool.m

function pooledFeatures = cnnPool(poolDim, convolvedFeatures)
%cnnPool Pools the given convolved features
%
% Parameters:
%  poolDim - dimension of pooling region
%  convolvedFeatures - convolved features to pool (as given by cnnConvolve)
%                      convolvedFeatures(imageRow, imageCol, featureNum, imageNum)
%
% Returns:
%  pooledFeatures - matrix of pooled features in the form
%                   pooledFeatures(poolRow, poolCol, featureNum, imageNum)
%     

numImages = size(convolvedFeatures, 4);
numFilters = size(convolvedFeatures, 3);
convolvedDim = size(convolvedFeatures, 1);

pooledFeatures = zeros(convolvedDim / poolDim, ...
        convolvedDim / poolDim, numFilters, numImages);

% Instructions:
%   Now pool the convolved features in regions of poolDim x poolDim,
%   to obtain the
%   (convolvedDim/poolDim) x (convolvedDim/poolDim) x numFeatures x numImages
%   matrix pooledFeatures, such that
%   pooledFeatures(poolRow, poolCol, featureNum, imageNum) is the
%   value of the featureNum feature for the imageNum image pooled over the
%   corresponding (poolRow, poolCol) pooling region.
%
%   Use mean pooling here.

%%% YOUR CODE HERE %%%
filter = ones(poolDim);
for imageNum = 1:numImages
  for filterNum = 1:numFilters
    im = squeeze(convolvedFeatures(:, :, filterNum, imageNum)); % squeeze is not strictly needed here
    pooledImage = conv2(im, filter, 'valid');
    pooledImage = pooledImage(1:poolDim:end, 1:poolDim:end); % keep one value per non-overlapping poolDim x poolDim block
    pooledImage = pooledImage ./ (poolDim * poolDim); % turn the window sums into means

    %pooledImage = sigmoid(pooledImage); % no sigmoid needed for pooling
    pooledImage = reshape(pooledImage, convolvedDim / poolDim, convolvedDim / poolDim, 1, 1); % reshape from 2-D to 4-D

    pooledFeatures(:, :, filterNum, imageNum) = pooledImage;
  end
end

end
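
A quick sanity check along the lines of what the exercise script does (my own toy version, not the official cnnExercise.m code):

% Pool an 8x8 matrix (one feature map of one image) with poolDim = 4
testMatrix = reshape(1:64, 8, 8);
pooled = cnnPool(4, reshape(testMatrix, 8, 8, 1, 1));

% Each output entry should equal the mean of the corresponding 4x4 quadrant
expected = [mean(mean(testMatrix(1:4, 1:4))) mean(mean(testMatrix(1:4, 5:8))); ...
            mean(mean(testMatrix(5:8, 1:4))) mean(mean(testMatrix(5:8, 5:8)))];
disp(squeeze(pooled));
disp(expected);   % the two matrices should match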

Author: linger

Link: http://blog.csdn.net/lingerlanlan/article/details/38502627
