Calculus on Computational Graphs: Backpropagation

Introduction

Backpropagation is the key algorithm that makes training deep models computationally tractable. For modern neural networks, it can make training with gradient descent as much as ten million times faster, relative to a naive implementation. That’s the difference between a model taking a week to train and taking 200,000 years.

Beyond its use in deep learning, backpropagation is a powerful computational tool in many other areas, ranging from weather forecasting to analyzing numerical stability – it just goes by different names. In fact, the algorithm has been reinvented at least dozens of times in different fields (see Griewank (2010)). The general, application-independent name is “reverse-mode differentiation.”

Fundamentally, it’s a technique for calculating derivatives quickly. And it’s an essential trick to have in your bag, not only in deep learning, but in a wide variety of numerical computing situations.

Computational Graphs

Computational graphs are a nice way to think about mathematical expressions. For example, consider the expression e=(a+b)∗(b+1). There are three operations: two additions and one multiplication. To help us talk about this, let’s introduce two intermediary variables, c and d, so that every function’s output has a variable. We now have:

c=a+b

d=b+1

e=c∗d

To create a computational graph, we make each of these operations, along with the input variables, into nodes. When one node’s value is the input to another node, an arrow goes from one to another.

These sorts of graphs come up all the time in computer science, especially in talking about functional programs. They are very closely related to the notions of dependency graphs and call graphs. They’re also the core abstraction behind the popular deep learning framework Theano.

We can evaluate the expression by setting the input variables to certain values and computing nodes up through the graph. For example, let’s set a=2 and b=1:

The expression evaluates to 6.
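
To make the evaluation step concrete, here is a minimal sketch (my own, not code from the post) that computes the intermediate nodes and the output for a = 2, b = 1:

```python
# Forward evaluation of the toy graph e = (a + b) * (b + 1).
def evaluate(a, b):
    c = a + b   # node c: addition
    d = b + 1   # node d: addition
    e = c * d   # node e: multiplication
    return c, d, e

c, d, e = evaluate(a=2, b=1)
print(c, d, e)  # 3 2 6 -- the expression evaluates to 6
```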

Derivatives on Computational Graphs

If one wants to understand derivatives in a computational graph, the key is to understand derivatives on the edges. If a directly affects c, then we want to know how it affects c. If a changes a little bit, how does c change? We call this the partial derivative of c with respect to a.

To evaluate the partial derivatives in this graph, we need the sum rule and the product rule:

∂/∂a (a + b) = ∂a/∂a + ∂b/∂a = 1

∂/∂u (uv) = u ∂v/∂u + v ∂u/∂u = v

Below, the graph has the derivative on each edge labeled.
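
Since each edge carries the partial derivative of its head node with respect to its tail node, at a = 2 and b = 1 the labels work out as follows (a sketch of my own; the dictionary of edges is just an illustrative data structure, not anything from the post):

```python
# Local derivative on each edge of the graph at a = 2, b = 1.
a, b = 2, 1
c, d = a + b, b + 1  # c = 3, d = 2

edge_derivatives = {
    ("a", "c"): 1,   # ∂c/∂a = 1 (sum rule)
    ("b", "c"): 1,   # ∂c/∂b = 1 (sum rule)
    ("b", "d"): 1,   # ∂d/∂b = 1 (sum rule)
    ("c", "e"): d,   # ∂e/∂c = d = 2 (product rule)
    ("d", "e"): c,   # ∂e/∂d = c = 3 (product rule)
}
```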

What if we want to understand how nodes that aren’t directly connected affect each other? Let’s consider how e is affected by a. If we change a at a speed of 1, c also changes at a speed of 1. In turn, c changing at a speed of 1 causes e to change at a speed of 2. So e changes at a rate of 1∗2 with respect to a.

The general rule is to sum over all possible paths from one node to the other, multiplying the derivatives on each edge of the path together. For example, to get the derivative of e with respect to b we get:

∂e/∂b = 1∗2 + 1∗3

This accounts for how b affects e through c and also how it affects it through d.

This general “sum over paths” rule is just a different way of thinking about the multivariate chain rule.
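
As a quick numeric check of the path-sum rule (again a sketch of my own, at a = 2, b = 1): the path through c contributes ∂c/∂b ∗ ∂e/∂c = 1 ∗ 2, and the path through d contributes ∂d/∂b ∗ ∂e/∂d = 1 ∗ 3, so ∂e/∂b = 5.

```python
# Sum over the two b -> e paths at a = 2, b = 1.
a, b = 2, 1
c, d = a + b, b + 1

de_db = 1 * d + 1 * c   # path via c: 1*2, path via d: 1*3
print(de_db)            # 5
```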

Factoring Paths

The problem with just “summing over the paths” is that it’s very easy to get a combinatorial explosion in the number of possible paths.

In the above diagram, there are three paths from X to Y, and a further three paths from Y to Z. If we want to get the derivative ∂Z/∂X by summing over all paths, we need to sum over 3∗3 = 9 paths:

∂Z/∂X = αδ + αε + αζ + βδ + βε + βζ + γδ + γε + γζ

The above only has nine paths, but it would be easy for the number of paths to grow exponentially as the graph becomes more complicated.

Instead of just naively summing over the paths, it would be much better to factor them:

∂Z/∂X = (α + β + γ)(δ + ε + ζ)
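
A quick sanity check (my own sketch, with arbitrary made-up edge values) that the factored form really equals the explicit sum over all nine paths:

```python
# Factored sum vs. explicit sum over all X -> Y -> Z paths.
alpha, beta, gamma = 0.5, 2.0, -1.0   # edge derivatives X -> Y
delta, eps, zeta = 3.0, 0.25, 1.5     # edge derivatives Y -> Z

explicit = sum(p * q for p in (alpha, beta, gamma) for q in (delta, eps, zeta))
factored = (alpha + beta + gamma) * (delta + eps + zeta)
assert abs(explicit - factored) < 1e-12
```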

This is where “forward-mode differentiation” and “reverse-mode differentiation” come in. They’re algorithms for efficiently computing the sum by factoring the paths. Instead of summing over all of the paths explicitly, they compute the same sum more efficiently by merging paths back together at every node. In fact, both algorithms touch each edge exactly once!

Forward-mode differentiation starts at an input to the graph and moves towards the end. At every node, it sums all the paths feeding in. Each of those paths represents one way in which the input affects that node. By adding them up, we get the total way in which the node is affected by the input – its derivative.

Though you probably didn’t think of it in terms of graphs, forward-mode differentiation is very similar to what you implicitly learned to do if you took an introduction to calculus class.

Reverse-mode differentiation, on the other hand, starts at an output of the graph and moves towards the beginning. At each node, it merges all paths which originated at that node.

Forward-mode differentiation tracks how one input affects every node. Reverse-mode differentiation tracks how every node affects one output. That is, forward-mode differentiation applies the operator ∂/∂X to every node, while reverse-mode differentiation applies the operator ∂Z/∂ to every node.¹

Computational Victories

At this point, you might wonder why anyone would care about reverse-mode differentiation. It looks like a strange way of doing the same thing as the forward-mode. Is there some advantage?

Let’s consider our original example again:

We can use forward-mode differentiation from b up. This gives us the derivative of every node with respect to b.

We’ve computed ∂e/∂b, the derivative of our output with respect to one of our inputs.
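
Written out as code, the forward-mode pass from b visits the nodes in evaluation order, seeding ∂b/∂b = 1 and pushing derivatives forward along each edge (a sketch of my own, not the post’s code):

```python
# Forward-mode differentiation from the input b, at a = 2, b = 1.
a, b = 2, 1
c, d = a + b, b + 1            # c = 3, d = 2

da_db, db_db = 0, 1            # seed: ∂a/∂b = 0, ∂b/∂b = 1
dc_db = da_db + db_db          # ∂c/∂b = 1 (sum rule)
dd_db = db_db                  # ∂d/∂b = 1 (sum rule)
de_db = d * dc_db + c * dd_db  # ∂e/∂b = 2*1 + 3*1 = 5 (product rule)
print(dc_db, dd_db, de_db)     # 1 1 5 -- every node's derivative w.r.t. b
```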

What if we do reverse-mode differentiation from e down? This gives us the derivative of e with respect to every node:

When I say that reverse-mode differentiation gives us the derivative of e with respect to every node, I really do mean every node. We get both ∂e/∂a and ∂e/∂b, the derivatives of e with respect to both inputs. Forward-mode differentiation gave us the derivative of our output with respect to a single input, but reverse-mode differentiation gives us all of them.
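
The reverse-mode pass is the mirror image: it visits the nodes in the opposite order, seeding ∂e/∂e = 1 and pulling derivatives backwards, summing contributions where paths merge (again my own sketch):

```python
# Reverse-mode differentiation from the output e, at a = 2, b = 1.
a, b = 2, 1
c, d = a + b, b + 1            # c = 3, d = 2

de_de = 1                      # seed: ∂e/∂e = 1
de_dc = de_de * d              # ∂e/∂c = 2
de_dd = de_de * c              # ∂e/∂d = 3
de_da = de_dc * 1              # ∂e/∂a = 2
de_db = de_dc * 1 + de_dd * 1  # ∂e/∂b = 2 + 3 = 5 (paths merge at b)
print(de_da, de_db)            # 2 5 -- both input derivatives from one pass
```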

For this graph, that’s only a factor of two speed up, but imagine a function with a million inputs and one output. Forward-mode differentiation would require us to go through the graph a million times to get the derivatives. Reverse-mode differentiation can get them all in one fell swoop! A speed up of a factor of a million is pretty nice!

When training neural networks, we think of the cost (a value describing how badly a neural network performs) as a function of the parameters (numbers describing how the network behaves). We want to calculate the derivatives of the cost with respect to all the parameters, for use in gradient descent. Now, there are often millions, or even tens of millions, of parameters in a neural network. So reverse-mode differentiation, called backpropagation in the context of neural networks, gives us a massive speed up!
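
This is exactly what a modern autodiff framework does when you call its backward pass. As an illustration only (my own sketch using PyTorch, which the post doesn’t mention; any reverse-mode framework would do), here is the toy expression again:

```python
import torch

# Reverse-mode differentiation done by the framework.
a = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(1.0, requires_grad=True)

e = (a + b) * (b + 1)   # building the expression records the graph
e.backward()            # one reverse-mode sweep from the output e

print(a.grad, b.grad)   # tensor(2.) tensor(5.) -- ∂e/∂a and ∂e/∂b
```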

(Are there any cases where forward-mode differentiation makes more sense? Yes, there are! Where the reverse-mode gives the derivatives of one output with respect to all inputs, the forward-mode gives us the derivatives of all outputs with respect to one input. If one has a function with lots of outputs, forward-mode differentiation can be much, much, much faster.)

Isn’t This Trivial?

When I first understood what backpropagation was, my reaction was: “Oh, that’s just the chain rule! How did it take us so long to figure out?” I’m not the only one who’s had that reaction. It’s true that if you ask “is there a smart way to calculate derivatives in feedforward neural networks?” the answer isn’t that difficult.

But I think it was much more difficult than it might seem. You see, at the time backpropagation was invented, people weren’t very focused on the feedforward neural networks that we study. It also wasn’t obvious that derivatives were the right way to train them. Those are only obvious once you realize you can quickly calculate derivatives. There was a circular dependency.

Worse, it would be very easy to write off any piece of the circular dependency as impossible on casual thought. Training neural networks with derivatives? Surely you’d just get stuck in local minima. And obviously it would be expensive to compute all those derivatives. It’s only because we know this approach works that we don’t immediately start listing reasons it’s likely not to.

That’s the benefit of hindsight. Once you’ve framed the question, the hardest work is already done.

Conclusion

Derivatives are cheaper than you think. That’s the main lesson to take away from this post. In fact, they’re unintuitively cheap, and we silly humans have had to repeatedly rediscover this fact. That’s an important thing to understand in deep learning. It’s also a really useful thing to know in other fields, and only more so if it isn’t common knowledge.

Are there other lessons? I think there are.

Backpropagation is also a useful lens for understanding how derivatives flow through a model. This can be extremely helpful in reasoning about why some models are difficult to optimize. The classic example of this is the problem of vanishing gradients in recurrent neural networks.

Finally, I claim there is a broad algorithmic lesson to take away from these techniques. Backpropagation and forward-mode differentiation use a powerful pair of tricks (linearization and dynamic programming) to compute derivatives more efficiently than one might think possible. If you really understand these techniques, you can use them to efficiently calculate several other interesting expressions involving derivatives. We’ll explore this in a later blog post.

This post gives a very abstract treatment of backpropagation. I strongly recommend reading Michael Nielsen’s chapter on it for an excellent discussion, more concretely focused on neural networks.

Acknowledgments

Thank you to Greg Corrado, Jon Shlens, Samy Bengio, and Anelia Angelova for taking the time to proofread this post.

Thanks also to Dario Amodei, Michael Nielsen, and Yoshua Bengio for discussion of approaches to explaining backpropagation. Also thanks to all those who tolerated me practicing explaining backpropagation in talks and seminar series!


  1. This might feel a bit like dynamic programming. That’s because it is!