Tricks for Deep Neural Networks

Here we will introduce these extensive implementation details, i.e., tricks or tips, for building and training your own deep networks.

The discussion is organized into eight aspects:

1. Data augmentation
2. Pre-processing on images
3. Initializations of networks
4. Some tips during training
5. Selections of activation functions
6. Diverse regularizations
7. Some insights found from figures
8. Methods of ensembling multiple deep networks

We introduce these eight aspects in turn.

I. Data Augmentation

1. The first and most commonly used data augmentation methods are horizontal flipping, random crops and color jittering. Moreover, you can try combinations of multiple different processing steps, e.g., doing rotation and random scaling at the same time. In addition, you can try to raise the saturation and value (the S and V components of the HSV color space) of all pixels to a power between 0.25 and 4 (the same power for all pixels within a patch), multiply these values by a factor between 0.7 and 1.4, and add to them a value between -0.1 and 0.1. You can also add a value between [-0.1, 0.1] to the hue (the H component of HSV) of all pixels in the image/patch.
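A minimal numpy sketch of this HSV jittering, assuming the patch has already been converted to an HSV array with all three channels scaled to [0, 1] (the conversion step and the `hsv` input are assumptions, not part of the original recipe):

import numpy as np

def jitter_hsv(hsv, rng=np.random):
    # hsv: H x W x 3 float array in [0, 1], assumed to be one patch in HSV space
    out = hsv.copy()
    power = rng.uniform(0.25, 4.0)   # same exponent for every pixel in the patch
    scale = rng.uniform(0.7, 1.4)
    shift = rng.uniform(-0.1, 0.1)
    out[..., 1:3] = np.clip(out[..., 1:3] ** power * scale + shift, 0.0, 1.0)  # S and V
    out[..., 0] = (out[..., 0] + rng.uniform(-0.1, 0.1)) % 1.0                 # hue wraps around
    return out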

2. Krizhevsky et al. [1] proposed fancy PCA, which alters the intensities of the RGB channels in training images. First, perform PCA on the set of RGB pixel values throughout your training images. Then, for each training image, add the following quantity to each RGB pixel $I_{xy} = [I_{xy}^{R}, I_{xy}^{G}, I_{xy}^{B}]^{T}$: $[\mathbf{p}_1, \mathbf{p}_2, \mathbf{p}_3][\alpha_1\lambda_1, \alpha_2\lambda_2, \alpha_3\lambda_3]^{T}$, where $\mathbf{p}_i$ and $\lambda_i$ are the $i$-th eigenvector and eigenvalue of the $3\times 3$ covariance matrix of RGB pixel values, respectively, and $\alpha_i$ is a random variable drawn from a Gaussian with mean zero and standard deviation 0.1.
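A hedged numpy sketch of fancy PCA; the function name and the `train_pixels` layout (an (N, 3) array of RGB values sampled from the training set) are assumptions:

import numpy as np

def fancy_pca(image, train_pixels, rng=np.random):
    # train_pixels: (N, 3) RGB values from the training set; image: (H, W, 3) image to augment
    cov = np.cov(train_pixels - train_pixels.mean(axis=0), rowvar=False)  # 3x3 RGB covariance
    eigvals, eigvecs = np.linalg.eigh(cov)        # lambda_i (eigenvalues), p_i (eigenvector columns)
    alphas = rng.normal(0.0, 0.1, size=3)         # alpha_i ~ N(0, 0.1), drawn once per image
    shift = eigvecs.dot(alphas * eigvals)         # [p1 p2 p3][a1*l1, a2*l2, a3*l3]^T
    return image + shift                          # the same 3-vector is added to every pixel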

II. Pre-processing

1. The first and simplest pre-processing approach is to zero-center the data and then normalize it.

code:
>>> X -= np.mean(X, axis = 0) # zero-center
>>> X /= np.std(X, axis = 0) # normalize

2. Another pre-processing approach, similar to the first one, is PCA whitening:

>>> X -= np.mean(X, axis = 0) # zero-center
>>> cov = np.dot(X.T, X) / X.shape[0] # compute the covariance matrix
>>> U,S,V = np.linalg.svd(cov) # compute the SVD factorization of the data covariance matrix
>>> Xrot = np.dot(X, U) # decorrelate the data
>>> Xwhite = Xrot / np.sqrt(S + 1e-5) # whiten: divide each dimension by the square root of its eigenvalue (1e-5 added for numerical stability)

A note on both methods: these transformations are rarely used with Convolutional Neural Networks in practice. However, it is still very important to zero-center the data, and it is common to see normalization of every pixel as well.

III. Initializations

1. All-zero initialization — if every weight is set to zero (or to the same value), every neuron computes the same gradient and receives the exact same parameter update, so there is no asymmetry between neurons.

In the ideal situation, with proper data normalization it is reasonable to assume that approximately half of the weights will be positive
and half of them will be negative. A reasonable-sounding idea then might be to set all the initial weights to zero, which you expect to be the “best guess” in expectation. But,
this turns out to be a mistake, because if every neuron in the network computes the same output, then they will also all compute the same gradients during back-propagation and undergo the exact same parameter updates. In other
words, there is no source of asymmetry between neurons if their weights are initialized to be the same.

2. Initialization with Small Random Numbers

You still want the weights to be very close to zero, but not identically zero. Initializing the neurons to small random numbers close to zero acts as symmetry breaking: the neurons are all random and unique in the beginning, so they compute distinct updates and integrate themselves as diverse parts of the full network. The implementation for the weights might simply look like $w = 0.001 \times N(0, 1)$, where $N(0, 1)$ is a zero-mean, unit-standard-deviation Gaussian. It is also possible to use small numbers drawn from a uniform distribution, but this seems to have relatively little impact on the final performance in practice.
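In code this might look like the following one-liner (D and H, the layer's input and output dimensions, are assumed placeholders):

>>> W = 0.001 * np.random.randn(D, H) # small random numbers from a zero-mean, unit-std Gaussian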

3. Calibrating the Variances

One problem with the above suggestion is that the distribution of the outputs of a randomly initialized neuron has a variance that grows with its number of inputs. It turns out that you can normalize the variance of each neuron's output to 1 by scaling its weight vector by the square root of its fan-in (i.e., its number of inputs), as follows:

>>> w = np.random.randn(n) / sqrt(n) # calibrating the variances with 1/sqrt(n)

4. Current Recommendation

The variance calibration above does not take ReLUs into account. A more recent paper on this topic by He et al. [4] derives an initialization specifically for ReLUs, reaching the conclusion that the variance of the neurons in the network should be $2.0/n$, where $n$ is the fan-in. This gives:

>>> w = np.random.randn(n) * sqrt(2.0/n) # current recommendation
[4] K. He, X. Zhang, S. Ren, and J. Sun. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In ICCV, 2015.

IV. During Training

1. Filter and pooling sizes

During training, the size of the input images is preferably a power of 2, such as 32 (e.g., CIFAR-10), 64, 224 (commonly used for ImageNet), 384 or 512. Moreover, it is important to employ a small filter (e.g., 3×3) and a small stride (e.g., 1) with zero-padding, which not only reduces the number of parameters but also improves the accuracy of the whole deep network. As a special case, 3×3 filters with stride 1 and padding 1 preserve the spatial size of images/feature maps. For the pooling layers, the commonly used pooling size is 2×2.
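The size-preserving claim can be checked with the standard convolution output-size formula; a small sketch (the helper name is ours, not from the original post):

def conv_output_size(w, f, p, s):
    # w: input width, f: filter size, p: zero-padding, s: stride
    return (w - f + 2 * p) // s + 1

print(conv_output_size(224, 3, 1, 1))  # 224 -> a 3x3 filter, stride 1, pad 1 keeps the spatial size
print(conv_output_size(224, 2, 0, 2))  # 112 -> a 2x2 pooling with stride 2 halves it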

2. Learning rate

As described in a blog post by Ilya Sutskever [2], he recommends dividing the gradients by the mini-batch size; consequently, you should not change the learning rate (LR) merely because you changed the mini-batch size. For choosing an appropriate LR, using the validation set is an effective way. A typical starting value of the LR is 0.1. In practice, if you see that you have stopped making progress on the validation set, divide the LR by 2 (or by 5) and keep going, which might give you a pleasant surprise.
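A minimal sketch of that schedule; `train_one_epoch` and `validation_accuracy` are placeholders standing in for your own training and evaluation code:

def train_one_epoch(lr):
    pass          # placeholder: one pass over the training data with learning rate lr

def validation_accuracy():
    return 0.0    # placeholder: accuracy on the held-out validation set

lr, best_acc, stalled = 0.1, 0.0, 0
for epoch in range(100):
    train_one_epoch(lr)
    acc = validation_accuracy()
    if acc > best_acc:
        best_acc, stalled = acc, 0
    else:
        stalled += 1
        if stalled >= 3:      # no progress on the validation set for a while
            lr /= 2.0         # divide the LR by 2 (or by 5) and keep going
            stalled = 0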

3. Fine-tuning on pre-trained models

Nowadays many state-of-the-art models are released publicly, e.g., in the Caffe Model Zoo and by the VGG Group. To further improve classification performance on your own data set, a very simple yet effective approach is to fine-tune these pre-trained models on your own data. The two most important factors to consider are the size of the new data set (small or big) and its similarity to the original training data; different fine-tuning strategies apply in different situations (summarized in the decision sketch at the end of this subsection). For instance, a good case — though, realistically, an uncommon one — is that your new data set is very similar to the data used for training the pre-trained models. In that case, if you have very little data, you can just train a linear classifier on the features extracted from the top layers of the pre-trained models.

If you have quite a lot of data at hand and it is similar to the original data, fine-tune a few top layers of the pre-trained models with a small learning rate.

However, if your own data set is quite different from the data used for the pre-trained models but you have enough training images, a large number of layers should be fine-tuned on your data, also with a small learning rate.

Finally, if your data set not only contains little data but is also very different from the data used in the pre-trained models, you will be in trouble. Since the data is limited, it seems better to only train a linear classifier; since the data set is very different, it might not be best to train that classifier on features from the top of the network, which contain more dataset-specific features. Instead, it might work better to train an SVM classifier on activations/features from somewhere earlier in the network.
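The table referenced above is missing from this copy; it can be summarized by this small decision sketch (the helper function is ours, not from the original post):

def fine_tune_strategy(new_data_is_small, similar_to_original):
    if similar_to_original:
        if new_data_is_small:
            return "train a linear classifier on features from the top layers"
        return "fine-tune a few top layers with a small learning rate"
    else:
        if new_data_is_small:
            return "train an SVM on activations from earlier layers of the network"
        return "fine-tune a large number of layers with a small learning rate"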

V. Selections of Activation Functions

One of the crucial factors in deep networks is the activation function, which brings non-linearity into the networks. Here we introduce the details and characteristics of some popular activation functions and give advice later in this section.

(The activation-function figures referred to below are taken from http://cs231n.stanford.edu/index.html and are omitted here.)

Several common activation functions:

Sigmoid

The sigmoid non-linearity has the mathematical form $\sigma(x) = 1/(1+e^{-x})$. It takes a real-valued number and "squashes" it into the range between 0 and 1. In particular, large negative numbers become 0 and large positive numbers become 1. The sigmoid function has seen frequent use historically since it has a nice interpretation as the firing rate of a neuron: from not firing at all (0) to fully-saturated firing at an assumed maximum frequency (1).

The sigmoid has now fallen out of favor, however, because of two major drawbacks:

(1) Sigmoids saturate and kill gradients. When the neuron's activation saturates at either tail of 0 or 1, the gradient in these regions is almost zero (the curve is essentially flat at both ends). Recall that during back-propagation this (local) gradient is multiplied by the gradient of this gate's output for the whole objective; therefore, if the local gradient is very small, it will effectively "kill" the gradient and almost no signal will flow through the neuron to its weights and, recursively, to its data. In addition, one must pay extra caution when initializing the weights of sigmoid neurons to prevent saturation: if the initial weights are too large, most neurons become saturated and the network will barely learn.

(2) Sigmoid outputs are not zero-centered. This is undesirable, since neurons in later layers of the network would be receiving data that is not zero-centered. This has implications for the dynamics of gradient descent: if the data coming into a neuron is always positive (e.g., $x > 0$ element-wise in $f = w^{T}x + b$), then during back-propagation the gradient on the weights $w$ will become either all positive or all negative (depending on the gradient of the whole expression $f$). This could introduce undesirable zig-zagging dynamics in the gradient updates for the weights. However, note that once these gradients are added up across a batch of data, the final update for the weights can have variable signs, somewhat mitigating the issue. Therefore, this is an inconvenience, but it has less severe consequences than the saturated activation problem above.

Tanh

The tanh non-linearity squashes a real-valued number to the range [-1, 1]. Like the sigmoid neuron, its activations saturate, but unlike the sigmoid neuron its output is zero-centered. Therefore, in practice the tanh non-linearity is always preferred to the sigmoid non-linearity.

Rectified Linear Unit

The Rectified Linear Unit (ReLU) has become very popular in the last few years. It computes the function $f(x) = \max(0, x)$, i.e., the activation is simply thresholded at zero.

There are several pros and cons to using ReLUs:

  1. (Pros) Compared to sigmoid/tanh neurons that involve expensive operations (exponentials, etc.), the ReLU can be implemented by simply thresholding a matrix of activations at zero. Meanwhile, ReLUs do not suffer from saturation.

  2. (Pros) It was found to greatly accelerate the convergence of stochastic gradient descent compared to the sigmoid/tanh functions (e.g., by a factor of 6 in [1]), which is argued to be due to its linear, non-saturating form.

  3. (Cons) Unfortunately, ReLU units can be fragile during training and can "die". For example, a large gradient flowing through a ReLU neuron could cause the weights to update in such a way that the neuron never activates on any data point again. If this happens, the gradient flowing through the unit will be zero forever from that point on; that is, ReLU units can irreversibly die during training since they can get knocked off the data manifold. For example, you may find that as much as 40% of your network is "dead" (i.e., neurons that never activate across the entire training set) if the learning rate is set too high. With a proper setting of the learning rate this is less frequently an issue.

Leaky ReLUs are one attempt to fix the "dying ReLU" problem. Instead of the function being zero when $x < 0$, a leaky ReLU has a small negative slope (of 0.01, or so). That is, the function computes $f(x) = \alpha x$ if $x < 0$ and $f(x) = x$ if $x \geq 0$, where $\alpha$ is a small constant. Some people report success with this form of activation function, but the results are not always consistent.

A series of further ReLU modifications followed:

The family now includes ReLU, Leaky ReLU, PReLU and RReLU. For Leaky ReLU the slope $\alpha$ is fixed; for PReLU $\alpha$ is learned; for RReLU $\alpha$ is a random variable that keeps being sampled from a given range during training and remains fixed during testing. A compact sketch of these variants follows.
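A compact numpy sketch of the four variants; the fixed slopes and the RReLU sampling range chosen here are illustrative defaults, not values prescribed by the original post:

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, a=0.01):
    # a is a small fixed constant
    return np.where(x >= 0, x, a * x)

def prelu(x, a):
    # a is a learned parameter (in practice one per channel, updated by back-propagation)
    return np.where(x >= 0, x, a * x)

def rrelu(x, lower=0.125, upper=0.333, training=True, rng=np.random):
    # during training the slope a is sampled from a uniform range;
    # at test time it stays fixed (here: the midpoint of the range)
    a = rng.uniform(lower, upper) if training else (lower + upper) / 2.0
    return np.where(x >= 0, x, a * x)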

[5] B. Xu, N. Wang, T. Chen, and M. Li. Empirical Evaluation of Rectified Activations in Convolution Network. In ICML Deep Learning Workshop, 2015.

The tables in [5] (omitted here) compare the performance of these activation functions on three data sets:

From those tables, we can see that ReLU does not give the best performance on any of the three data sets. For Leaky ReLU, a larger slope $\alpha$ achieves better accuracy rates. PReLU is easy to overfit on small data sets (its training error is the smallest, while its testing error is not satisfactory), but it still outperforms ReLU. In addition, RReLU is significantly better than the other activation functions on NDSB, which suggests that RReLU can overcome overfitting, because this data set has less training data than CIFAR-10/CIFAR-100. In conclusion, the three ReLU variants all consistently outperform the original ReLU on these three data sets, and PReLU and RReLU seem to be the better choices. Moreover, He et al. reported similar conclusions in [4].

[4] K. He, X. Zhang, S. Ren, and J. Sun. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In ICCV, 2015.

VI. Regularizations

There are several ways of controlling the capacity of Neural Networks to prevent overfitting:

  • L2 regularization is perhaps the most common form of regularization. It can be implemented by penalizing the squared magnitude of all parameters directly in the objective. That is, for every weight $w$ in the network, we add the term $\frac{1}{2}\lambda w^2$ to the objective, where $\lambda$ is the regularization strength. It is common to see the factor of $\frac{1}{2}$ in front because then the gradient of this term with respect to the parameter $w$ is simply $\lambda w$ instead of $2\lambda w$. The L2 regularization has the intuitive interpretation of heavily penalizing peaky weight vectors and preferring diffuse weight vectors.
  • L1 regularization is another relatively common form of regularization, where for each weight $w$ we add the term $\lambda|w|$ to the objective. It is possible to combine the L1 regularization with the L2 regularization: $\lambda_1|w| + \lambda_2 w^2$ (this is called Elastic net regularization). The L1 regularization has the intriguing property that it leads the weight vectors to become sparse during optimization (i.e., very close to exactly zero). In other words, neurons with L1 regularization end up using only a sparse subset of their most important inputs and become nearly invariant to the "noisy" inputs. In comparison, final weight vectors from L2 regularization are usually diffuse, small numbers. In practice, if you are not concerned with explicit feature selection, L2 regularization can be expected to give superior performance over L1.
  • Max norm constraints. Another form of regularization is to enforce an absolute upper bound on the magnitude of the weight vector of every neuron and use projected gradient descent to enforce the constraint. In practice, this corresponds to performing the parameter update as normal, and then enforcing the constraint by clamping the weight vector $\vec{w}$ of every neuron to satisfy $\|\vec{w}\|_2 < c$. Typical values of $c$ are on the order of 3 or 4. Some people report improvements when using this form of regularization. One of its appealing properties is that the network cannot "explode" even when the learning rate is set too high, because the updates are always bounded.
  • Dropout is an extremely effective, simple and recently introduced regularization technique by Srivastava et al. [6] that complements the other methods (L1, L2, max norm). During training, dropout can be interpreted as sampling a sub-network within the full neural network, and only updating the parameters of the sampled network based on the input data. (However, the exponential number of possible sampled networks are not independent because they share parameters.) During testing no dropout is applied, which can be interpreted as evaluating an averaged prediction across the exponentially-sized ensemble of all sub-networks (more about ensembles in the next section). In practice, a dropout ratio of $p = 0.5$ is a reasonable default, but this can be tuned on validation data. A minimal implementation sketch follows this list.
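A minimal sketch of dropout for one layer's activations, following the "inverted dropout" convention from the cs231n notes, where $p$ is the probability of keeping a unit active; the function and variable names are ours:

import numpy as np

p = 0.5  # probability of keeping a unit active; a reasonable default

def dropout_forward(h, train=True):
    # h: activations of one hidden layer
    if not train:
        return h                                   # no dropout at test time
    mask = (np.random.rand(*h.shape) < p) / p      # scale at train time so test time needs no change
    return h * mask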

[6] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. JMLR, 15(Jun):1929–1958, 2014.

VII. Insights Found from Figures

1. As we know, the learning rate is very sensitive. From Fig. 1, a very high learning rate will produce a quite strange loss curve. A low learning rate will make your training loss decrease very slowly even after a large number of epochs. In contrast, a high learning rate will make the training loss decrease fast at the beginning, but it will also drop into a local minimum, so the network might not achieve satisfactory results in that case. For a good learning rate, as the red line in Fig. 1 shows, the loss curve decreases smoothly and finally achieves the best performance.

2. Now let's zoom in on the loss curve. An epoch is one pass over the training data, so there are multiple mini-batches in each epoch. If we plot the classification loss for every training batch, the curve looks like Fig. 2 (the vertical axis is the loss, the horizontal axis is the epoch; the vertical spread at each epoch comes from the per-batch losses within that epoch). Similar to Fig. 1, if the trend of the loss curve looks too linear, your learning rate is low; if it does not decrease much, the learning rate might be too high. Moreover, the "width" of the curve is related to the batch size: if the "width" looks too wide, the variance between batches is too large, which indicates you should increase the batch size.

3. Another tip comes from the accuracy curve. As shown in Fig. 3, the red line is the training accuracy and the green line is the validation accuracy. When the validation accuracy converges, the gap between the red line and the green line shows the effectiveness of your deep network. If the gap is big, the network achieves good accuracy on the training data but only low accuracy on the validation set; obviously, the model overfits the training set, so you should increase the regularization strength. However, no gap together with a low accuracy level is not a good thing either: it shows your model has low learning capacity, and in that case it is better to increase the model capacity for better results.

VIII. Ensemble

In machine learning, ensemble methods [8] that train multiple learners and
then combine them for use are a kind of state-of-the-art learning approach. It is well known that an ensemble is usually significantly more accurate than a single learner, and ensemble methods have already achieved great success in many real-world tasks. In
practical applications, especially challenges or competitions, almost all the first-place and second-place winners used ensemble methods.

Here we introduce several skills for ensembling in the deep learning scenario (a minimal prediction-averaging sketch follows the list).

  • Same model, different initialization. Use cross-validation to determine the best hyperparameters, then train multiple models with the best set of hyperparameters but with different random initialization. The danger with this approach is that
    the variety is only due to initialization.
  • Top models discovered during cross-validation. Use cross-validation to determine the best hyperparameters, then pick the top few (e.g., 10) models to form the ensemble. This improves the variety of the ensemble but has the danger of including suboptimal models. In practice, this can be easier to perform since it does not require additional retraining of models after cross-validation. Actually, you could directly select several state-of-the-art deep models from the Caffe Model Zoo to perform the ensemble.
  • Different checkpoints of a single model. If training is very expensive, some people have had limited success in taking different checkpoints of a single network over time (for example after every epoch) and using those to form an ensemble. Clearly, this suffers from some lack of variety, but it can still work reasonably well in practice. The advantage of this approach is that it is very cheap.
  • Some practical examples. If your vision tasks are related to high-level image semantics, e.g., event recognition from still images, a better ensemble method is to employ multiple deep models trained on different data sources to extract different and complementary deep representations. For example, in the Cultural Event Recognition challenge associated with ICCV'15, we utilized five different deep models trained on images from ImageNet, the Place Database and the cultural images supplied by the competition organizers. We then extracted five complementary deep features and treated them as multi-view data. Combining the "early fusion" and "late fusion" strategies described in [7], we achieved one of the best performances and ranked 2nd place in that challenge. Similar to our work, [9] presented the Stacked NN framework to fuse more deep networks at the same time.
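At prediction time, the simplest way to combine such models is to average their class probabilities ("late fusion"); a sketch, where `models` and the `predict_proba` method are assumptions standing in for your own inference code:

import numpy as np

def ensemble_predict(models, x):
    # models: list of trained networks; each predict_proba(x) returns a (num_classes,) probability vector
    probs = np.mean([m.predict_proba(x) for m in models], axis=0)  # average the predictions
    return np.argmax(probs)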

IX. Miscellaneous

In real-world applications the data is usually class-imbalanced: some classes have a large number of images/training instances, while others have only a very limited number.

As discussed in a recent technical report [10], when deep CNNs are trained on such imbalanced training sets, the imbalanced training data can have a severely negative impact on the overall performance of deep networks.

For this issue, the simplest method is to balance the training data by directly up-sampling and down-sampling the imbalanced classes, as shown in [10]; a small up-sampling sketch follows.
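A hedged sketch of such up-sampling on a list of (image, label) pairs; the data layout and function name are assumptions:

import numpy as np
from collections import Counter

def upsample(samples, rng=np.random):
    # samples: list of (image, label) pairs; duplicate minority-class samples until all classes match the largest one
    counts = Counter(label for _, label in samples)
    target = max(counts.values())
    balanced = list(samples)
    for label, n in counts.items():
        pool = [s for s in samples if s[1] == label]
        extra = [pool[i] for i in rng.choice(len(pool), target - n)]
        balanced.extend(extra)
    return balanced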

Another interesting solution is the special crop processing used in our challenge solution [7]. Because the original cultural event images are imbalanced, we extracted crops only from the classes with a small number of training images, which on the one hand supplies diverse data sources, and on the other hand alleviates the class-imbalance problem.

[7] X.-S. Wei, B.-B. Gao, and J. Wu. Deep Spatial Pyramid Ensemble for Cultural Event Recognition. In ICCV ChaLearn Looking at People Workshop, 2015.

In addition, you can adjust your fine-tuning strategy to overcome class imbalance. For example, divide your data set into two parts: one contains the classes with a large number of training samples (images/crops), the other contains the classes with a limited number of samples. Within each part, the class-imbalance problem will not be very serious. Then, at the beginning of fine-tuning on your data set, first fine-tune on the classes with many training samples, and afterwards continue fine-tuning on the classes with few samples.

[10] P. Hensman and D. Masko. The Impact of Imbalanced Training Data for Convolutional Neural Networks. Degree Project in Computer Science, DD143X, 2015.

References & Source Links

  1. A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In NIPS, 2012.
  2. A Brief Overview of Deep Learning, a guest post by Ilya Sutskever.
  3. CS231n: Convolutional Neural Networks for Visual Recognition, Stanford University, taught by Prof. Fei-Fei Li and Andrej Karpathy.
  4. K. He, X. Zhang, S. Ren, and J. Sun. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In ICCV, 2015.
  5. B. Xu, N. Wang, T. Chen, and M. Li. Empirical Evaluation of Rectified Activations in Convolution Network. In ICML Deep Learning Workshop, 2015.
  6. N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. JMLR, 15(Jun):1929–1958, 2014.
  7. X.-S. Wei, B.-B. Gao, and J. Wu. Deep Spatial Pyramid Ensemble for Cultural Event Recognition. In ICCV ChaLearn Looking at People Workshop, 2015.
  8. Z.-H. Zhou. Ensemble Methods: Foundations and Algorithms. Boca Raton, FL: Chapman & Hall/CRC, 2012. (ISBN 978-1-439-830031)
  9. M. Mohammadi and S. Das. S-NN: Stacked Neural Networks. Project in Stanford CS231n Winter Quarter, 2015.
  10. P. Hensman and D. Masko. The Impact of Imbalanced Training Data for Convolutional Neural Networks. Degree Project in Computer Science, DD143X, 2015.

See also: http://blog.csdn.net/pandav5/article/details/51178032

http://blog.csdn.net/u010025211/article/details/51202236
