What are the advantages of ReLU over the sigmoid function in deep neural networks?

The state of the art in non-linearities is to use ReLU instead of the sigmoid function in deep neural networks. What are the advantages?

I know that training a network is faster when ReLU is used, and that it is more biologically inspired. What are the other advantages? (That is, are there any disadvantages to using sigmoid?)

Best answer on Stack Exchange:

Two additional major benefits of ReLUs are sparsity and a reduced likelihood of vanishing gradient. But first, recall that the definition of a ReLU is h = max(0, a), where a = Wx + b.

One major benefit is the reduced likelihood of the gradient vanishing. This arises when a > 0. In this regime the gradient has a constant value. In contrast, the gradient of sigmoids becomes increasingly small as the absolute value of x increases. The constant gradient of ReLUs results in faster learning.

The other benefit of ReLUs is sparsity. Sparsity arises when a ≤ 0. The more such units exist in a layer, the sparser the resulting representation. Sigmoids, on the other hand, are always likely to generate some non-zero value, resulting in dense representations. Sparse representations seem to be more beneficial than dense representations.
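To make both benefits concrete, here is a minimal numerical sketch (assuming NumPy; the toy pre-activation values are made up for illustration):

    import numpy as np

    a = np.array([-3.0, -0.5, 0.0, 0.5, 3.0, 10.0])   # toy pre-activations a = Wx + b

    relu = np.maximum(0.0, a)              # h = max(0, a)
    relu_grad = (a > 0).astype(float)      # constant 1 wherever a > 0

    sig = 1.0 / (1.0 + np.exp(-a))         # h = 1 / (1 + exp(-a))
    sig_grad = sig * (1.0 - sig)           # shrinks as |a| grows

    print(relu)       # [ 0.   0.   0.   0.5  3.  10. ]  -- exact zeros: sparse
    print(relu_grad)  # [0. 0. 0. 1. 1. 1.]              -- no vanishing for a > 0
    print(sig_grad)   # [0.045 0.235 0.25 0.235 0.045 0.0000454] -- vanishes

Note how the sigmoid gradient peaks at only 0.25 and decays toward zero in both directions, while the ReLU units at a ≤ 0 output exact zeros, which is where the sparsity comes from.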

Reference: http://stats.stackexchange.com/questions/126238/what-are-the-advantages-of-relu-over-sigmoid-function-in-deep-neural-network

ReLU

ReLU is short for rectified linear unit. The answer above essentially covers the ways in which it beats the sigmoid function:

  1. faster training
  2. more biologically inspired
  3. sparsity
  4. less chance of vanishing gradients

Early deep learning models that used sigmoid or tanh activations would fail to converge when doing unsupervised learning because of the vanishing gradient problem. ReLU does not have this problem.
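To see how this compounds with depth, here is a rough sketch (assuming NumPy; the depth of 30 layers, width of 64, and the 1/sqrt(width) weight scale are arbitrary illustration choices) that runs a forward pass and then backpropagates a unit gradient through sigmoid layers versus ReLU layers:

    import numpy as np

    rng = np.random.default_rng(0)
    depth, width = 30, 64
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

    def backprop_norm(act, act_grad):
        # Forward pass through `depth` random layers, caching pre-activations
        x = rng.normal(size=width)
        Ws, pre = [], []
        for _ in range(depth):
            W = rng.normal(0.0, 1.0 / np.sqrt(width), (width, width))
            a = W @ x
            Ws.append(W)
            pre.append(a)
            x = act(a)
        # Backward pass: apply the chain rule one layer at a time
        g = np.ones(width)
        for W, a in zip(reversed(Ws), reversed(pre)):
            g = W.T @ (g * act_grad(a))
        return np.linalg.norm(g)

    print(backprop_norm(sigmoid, lambda a: sigmoid(a) * (1 - sigmoid(a))))
    # vanishingly small: the sigmoid derivative is at most 0.25, so the
    # gradient shrinks geometrically with depth
    print(backprop_norm(lambda a: np.maximum(0.0, a), lambda a: (a > 0).astype(float)))
    # orders of magnitude larger: active ReLU units pass the gradient
    # through with derivative exactly 1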

Posted: 2024-12-20 14:30:14

Related articles for "What are the advantages of ReLU over the sigmoid function in deep neural networks?"

Sigmoid function in NN

    % Forward pass without a sigmoid activation
    X = [ones(m, 1) X];            % add bias column
    temp = X * Theta1';
    t = size(temp, 1);
    temp = [ones(t, 1) temp];      % add bias column for the hidden layer
    h = temp * Theta2';
    [max_num, p] = max(h, [], 2);  % predicted class = index of the largest output
    % Without sigmoid function, Training Set Accuracy: 69.620000
    % Variant with the sigmoid activation (truncated in the excerpt):
    X = [ones(m, 1) X];
    temp = X * Theta1';
    temp = sig…

The S-Function: Sigmoid Function or Logistic Function

Octave code:

    % Plot the sigmoid (logistic) curve
    x = -10:0.1:10;
    y = zeros(length(x), 1);
    for i = 1:length(x)
      y(i) = 1 / (1 + exp(-x(i)));
    end
    figure;
    plot(x, y, '-b', 'LineWidth', 2);

An intuitive explanation of the sigmoid function

The sigmoid function, also called the logistic function, plays the role in logistic regression of mapping the regression estimate h(x) from (-inf, inf) to [0, 1]. The formula is g(z) = 1 / (1 + exp(-z)). When the output is greater than 0.5 the object being classified is assigned to class 1, otherwise to class 0. The intuitive meaning of this value is the predicted probability of belonging to the positive class. For example, sigmoid(h(x)) = 0.7 means that an object with features x belongs to class 1 with probability 0.7, and to class 0 with probability 0.3.
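A minimal sketch of that mapping and decision rule (assuming NumPy; the weights w, b and the feature vector x are hypothetical values, not from any fitted model):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    w, b = np.array([1.5, -2.0]), 0.3   # hypothetical learned parameters
    x = np.array([0.8, 0.2])            # hypothetical feature vector

    p = sigmoid(w @ x + b)              # maps (-inf, inf) into (0, 1)
    label = int(p > 0.5)                # threshold at 0.5
    print(p, label)                     # p ≈ 0.75 -> class 1 with probability ~0.75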

Comparing the ReLU and sigmoid functions

For a detailed comparison see: http://www.zhihu.com/question/29021768/answer/43517930 . 1. The purpose of an activation function: to add non-linearity to the neural network model. Otherwise, consider: each layer without an activation function amounts to a matrix multiplication, so even after stacking several layers you still have nothing but a matrix multiplication; without non-linear structure it hardly counts as a neural network at all (a quick numerical check of this collapse follows below). 2. Why ReLU works well: focus on Section 6.6, Piecewise Linear Hidden Units: http://www.iro.umontreal.ca/~b…
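Here is that check, as a sketch (assuming NumPy, with arbitrary 4x4 matrices):

    import numpy as np

    rng = np.random.default_rng(1)
    W1, W2, W3 = (rng.normal(size=(4, 4)) for _ in range(3))
    x = rng.normal(size=4)

    # Three stacked linear layers...
    deep = W3 @ (W2 @ (W1 @ x))
    # ...equal a single layer whose matrix is the product W3 @ W2 @ W1
    collapsed = (W3 @ W2 @ W1) @ x
    print(np.allclose(deep, collapsed))  # True: depth buys nothing without non-linearity

    # Insert ReLUs between the layers and the collapse no longer holds
    relu = lambda a: np.maximum(0.0, a)
    print(np.allclose(W3 @ relu(W2 @ relu(W1 @ x)), collapsed))  # False in general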

What exactly are the activation functions in a neural network, and why is ReLU better than tanh and the sigmoid function?

Why introduce activation functions at all? Without one (which is equivalent to using the identity activation f(x) = x), each layer's output is a linear function of the previous layer's input, and it is easy to verify that no matter how many layers the network has, the output is just a linear combination of the input, no better than having no hidden layers at all; this is simply the original perceptron. For exactly this reason we introduce a non-linear function as the activation, which is what makes deep networks meaningful: no longer mere linear combinations of the input, they can approximate arbitrary functions. The earliest choices were the sigmoid and tanh functions, whose bounded outputs are convenient to feed into the next layer. The purpose of an activation function is to add non-linearity to the neural network model; otherwise…
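As one concrete instance of what the added non-linearity buys, a sketch (assuming NumPy): a single hidden layer of two ReLU units represents |x| exactly, a function no purely linear network can express:

    import numpy as np

    relu = lambda a: np.maximum(0.0, a)

    # |x| = relu(x) + relu(-x): two hidden units with weights +1 and -1,
    # combined by an output layer that simply sums them
    x = np.linspace(-3, 3, 7)            # [-3. -2. -1.  0.  1.  2.  3.]
    print(relu(x) + relu(-x))            # [ 3.  2.  1.  0.  1.  2.  3.] == |x|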

A beginner's guide to the PyTorch framework (5): the multi-layer perceptron (MLP) (tensor, variable, computation graph, ReLU(), sigmoid(), tanh())

First, some notes on tensors, variables, and computation graphs, which I failed to record (or fully understand) when I first studied torch and have since forgotten. Computation graph: put plainly, a computation graph is a mathematical formula (also called a model) expressed as a graph. Borrowing from https://hzzone.io/cs231n/%E7%90%86%E8%A7%A3-PyTorch-%E8%AE%A1%E7%AE%97%E5%9B%BE%E3%80%81Autograd-%E6%9C%BA%E5%88%B6%E5%92%8C%E5%AE%9E%E7%8E%B0%E7%BA%BF%E…
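A minimal sketch of what the computation graph gives you (assuming PyTorch): build a small graph and let autograd compute the activation gradients instead of deriving them by hand:

    import torch

    x = torch.tensor([-2.0, 0.5, 3.0], requires_grad=True)

    # Build a tiny graph through ReLU and backpropagate
    torch.relu(x).sum().backward()
    print(x.grad)   # tensor([0., 1., 1.]) -- ReLU gradient is 0 or 1

    x.grad = None   # clear before building a new graph with the same leaf
    torch.sigmoid(x).sum().backward()
    print(x.grad)   # tensor([0.1050, 0.2350, 0.0452]) -- shrinks for large |x|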

The sigmoid function and the softmax function

The sigmoid function (also called the logistic function). Quoting the Wikipedia definition: A logistic function or logistic curve is a common "S" shape (sigmoid curve). In other words, the logistic function is what is usually meant by the sigmoid function, and its graph is an S-shaped curve. Likewise, quoting Wikipedia's definition of the softmax function: softmax is a generalization of logistic fu…
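That generalization is easy to verify numerically; as a sketch (assuming NumPy), the two-class softmax over the logits [z, 0] reproduces sigmoid(z):

    import numpy as np

    def softmax(z):
        e = np.exp(z - np.max(z))   # subtract the max for numerical stability
        return e / e.sum()

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    z = 1.3
    print(softmax(np.array([z, 0.0]))[0])  # 0.7858...
    print(sigmoid(z))                      # same value: sigmoid is the 2-class softmax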

[C4] Andrew Ng - Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization

About this Course: This course will teach you the "magic" of getting deep learning to work well. Rather than the deep learning process being a black box, you will understand what drives performance, and be able to more systematically get good res…

[C3] Andrew Ng - Neural Networks and Deep Learning

About this Course: If you want to break into cutting-edge AI, this course will help you do so. Deep learning engineers are highly sought after, and mastering deep learning will give you numerous new career opportunities. Deep learning is also a new "s…