Course 2 (Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization), Week 1 (Practical aspects of Deep Learning) —— 1. Practice Questions: Practical aspects of deep learning

【Translation】

4. You are working on an automated checkout kiosk for a supermarket, and are building a classifier for apples, bananas and oranges. Suppose your classifier obtains a training set error of 0.5% and a dev set error of 7%. Which of the following are promising things to try to improve your classifier? (Check all that apply.) (A, C)

(A) Increase the regularization parameter lambda

(B) Decrease the regularization parameter lambda

(C) Get more training data

(D) Use a bigger neural network

【Explanation】

As long as regularization is kept moderate, building a bigger network usually reduces bias without hurting variance, and getting more data usually reduces variance without hurting bias much. Here the training error (0.5%) is low but the dev error (7%) is much higher, so the classifier suffers from high variance (overfitting); increasing the regularization parameter lambda and getting more training data both attack variance, whereas a bigger network would not help.
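The rule of thumb above can be sketched numerically. The helper below is a hypothetical illustration (the function name `diagnose` and the optional `bayes_error` parameter are assumptions, not from the course): it splits the two error rates into an avoidable-bias term and a variance (generalization-gap) term.

```python
def diagnose(train_error, dev_error, bayes_error=0.0):
    """Rough bias/variance diagnosis from error rates given as fractions."""
    bias = train_error - bayes_error      # avoidable bias
    variance = dev_error - train_error    # generalization gap
    return bias, variance

# The errors from the question: 0.5% train, 7% dev.
bias, variance = diagnose(0.005, 0.07)
# variance (~0.065) dominates bias (~0.005): a high-variance problem
```

With a large gap like this, regularization and more data are the promising levers; growing the network would only make the gap worse.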

【Translation】

5. What is weight decay? (3)

(1) Gradually decreasing the learning rate during training.

(2) Gradual corruption of the weights in the neural network if it is trained on noisy data.

(3) A regularization technique (such as L2 regularization) that results in gradient descent shrinking the weights on every iteration.

(4) A technique to avoid vanishing gradients by imposing a ceiling on the values of the weights.
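The shrinkage described in option (3) can be seen in a few lines. This is a minimal sketch with illustrative values (`lr`, `lam`, and the single weight `w` are assumptions for the demo): the L2 penalty adds `lam * w` to the gradient, so every gradient-descent step effectively multiplies `w` by `(1 - lr * lam)` before applying the usual data gradient.

```python
lr, lam = 0.1, 0.5        # learning rate and regularization strength (assumed)
w = 4.0                   # a single weight, for clarity
data_grad = 0.0           # zero data gradient isolates the decay effect

for _ in range(3):
    # L2-regularized gradient step; equivalently w *= (1 - lr * lam)
    w = w - lr * (data_grad + lam * w)

# after 3 steps, w == 4.0 * 0.95**3: the weight has decayed toward 0
```

This multiplicative shrink toward 0 on every iteration is exactly why L2 regularization is also called "weight decay".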

【Translation】

6. What happens when you increase the regularization hyperparameter lambda? (1)

(1) Weights are pushed toward becoming smaller (closer to 0)

(2) Weights are pushed toward becoming bigger (further from 0)

(3) Doubling lambda should roughly result in doubling the weights

(4) Gradient descent takes bigger steps with each iteration (proportional to lambda)

【Explanation】

Increasing lambda puts a heavier penalty on large weights in the cost function, so gradient descent shrinks the weights toward 0 more aggressively on every iteration. The weights therefore become smaller (closer to 0).
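Concretely, with L2 regularization the gradient-descent update for a layer's weight matrix becomes (with learning rate \(\alpha\) and \(m\) training examples):

```latex
W^{[l]} := W^{[l]} - \alpha \left( dW^{[l]}_{\text{backprop}} + \frac{\lambda}{m} W^{[l]} \right)
         = \left( 1 - \frac{\alpha \lambda}{m} \right) W^{[l]} - \alpha \, dW^{[l]}_{\text{backprop}}
```

Since \(0 < 1 - \frac{\alpha\lambda}{m} < 1\) for reasonable hyperparameters, a larger \(\lambda\) multiplies the weights by a smaller factor on every iteration, pushing them closer to 0.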
-------------------------------------------------------------------------------------------------------------------------

Answers are for reference only.

Time: 2024-11-05 11:35:08
