Nesterov Momentum Equivalence Derivation
That Missing Piece in the Papers and the Slides
2020-02-08 • 5 min read
During the Winter Quarter of 2020 at UCLA, I was taking a class on neural networks and deep learning. The course is numbered ECE C147/247, and was adapted from the famous Stanford CS 231n.
Note: I assume you (the reader) already know what the momentum update is, and how Nesterov momentum differs from plain momentum.
In the chapter on optimization for training (the gradient descent stage), we learned about a technique called Nesterov momentum, used when updating the weights to improve training speed.
The formulation of the Nesterov momentum update is given as

Updating momentum:

$$v_{t+1} = \mu v_t - \varepsilon \nabla L(\theta_t + \mu v_t)$$

Updating parameter:

$$\theta_{t+1} = \theta_t + v_{t+1}$$
Here $L$ is the loss function, $\mu$ the momentum coefficient, and $\varepsilon$ the learning rate. However, this formulation is difficult to implement in code. In code, we usually have $\nabla L(\theta_t)$ available to us instead of $\nabla L(\theta_t + \mu v_t)$.
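To make the difficulty concrete, here is a minimal sketch of one step of the original formulation (function and variable names are my own, not from the course code; I assume NumPy arrays). Note that it needs the gradient at the look-ahead point `theta + mu * v`, which means an extra gradient evaluation that most training loops do not naturally provide:

```python
import numpy as np

def nesterov_update_original(theta, v, grad_fn, mu=0.9, lr=0.1):
    """One step of the original Nesterov formulation.

    grad_fn must be evaluated at the look-ahead point
    theta + mu * v, which is what makes this form awkward:
    backprop normally hands us the gradient at theta itself.
    """
    lookahead = theta + mu * v                    # peek ahead along the momentum
    v_next = mu * v - lr * grad_fn(lookahead)     # v_{t+1}
    theta_next = theta + v_next                   # theta_{t+1}
    return theta_next, v_next
```

For example, with the toy loss $L(\theta) = \theta^2$ (so the gradient is $2\theta$), starting from $\theta = 1$ and $v = 0$, one step gives $v_1 = -0.2$ and $\theta_1 = 0.8$.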
To solve this problem, an alternative formulation is given. We first let

$$\tilde{\theta}_t = \theta_t + \mu v_t$$
Then, the formulation becomes

$$v_{t+1} = \mu v_t - \varepsilon \nabla L(\tilde{\theta}_t)$$

$$\tilde{\theta}_{t+1} = \tilde{\theta}_t - \mu v_t + (1 + \mu) v_{t+1}$$
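This alternative form maps directly to code, because the gradient is taken at the stored parameter itself. A minimal sketch of one step (names are my own, assuming NumPy arrays; `grad` is the gradient of the loss evaluated at the stored parameter, exactly what backprop gives us):

```python
import numpy as np

def nesterov_update_alternative(theta_tilde, v, grad, mu=0.9, lr=0.1):
    """One step of the alternative Nesterov formulation.

    `grad` is the gradient at theta_tilde itself -- no
    look-ahead evaluation is needed.
    """
    v_prev = v
    v = mu * v - lr * grad                                  # v_{t+1} = mu v_t - lr * dL(theta_tilde_t)
    theta_tilde = theta_tilde - mu * v_prev + (1 + mu) * v  # theta_tilde_{t+1}
    return theta_tilde, v
```

With the same toy loss $L(\theta) = \theta^2$ starting from $\tilde{\theta} = 1$, $v = 0$: the gradient at $\tilde{\theta}$ is $2$, so $v_1 = -0.2$ and $\tilde{\theta}_1 = 1 + 1.9 \cdot (-0.2) = 0.62$.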
In class, no explanation was given of how the two formulations are equivalent. Instead, we were given a hint that the proof can be done through a substitution of variables. This formulation still confused me a lot.
There are multiple points of confusion here:

1. How do we calculate $\nabla L(\tilde{\theta}_t)$?
2. What is the actual meaning of $\tilde{\theta}$?
3. What are we storing as our parameter?
I then sought help from the Stanford course webpage on this chapter. It points to two papers on the derivation of Nesterov momentum. Only one of them (section 3.5) discusses the alternative formulation, but it simply states the two formulations without proof. I understand that the target audience of the paper is very likely capable of figuring out the equivalence, but I am not one of them.
The paper did offer some insight:

> the parameters $\tilde{\theta}$ are a completely equivalent replacement of $\theta$.
This offers the answer to confusion 3: we are storing $\tilde{\theta}$ as the parameter. In code, what is given to us is the derivative of $L$ with respect to $\tilde{\theta}$. Using the chain rule (within one step, $v_t$ is a constant, so $\partial \tilde{\theta}_t / \partial \theta_t = I$), we can show that

$$\nabla_{\tilde{\theta}_t} L(\tilde{\theta}_t) = \nabla_{\theta_t} L(\theta_t + \mu v_t)$$
This answers confusion 1, since in code we are given the gradient of the loss with respect to $\tilde{\theta}$, and it is equivalent to the gradient with respect to $\theta$ evaluated at the look-ahead point $\theta_t + \mu v_t$. Now back to confusion 2: what is the meaning of $\tilde{\theta}$? Is it okay to use it as the parameter for prediction and testing? The answer is yes. By definition, $\tilde{\theta}_t = \theta_t + \mu v_t$ is simply the parameter after one more momentum update. Therefore, we can think of it as a parameter value that is closer to the optimum.
I had to take the time to do the derivation on my own. Here, I offer the derivation of the equivalence. First, we have the substitution and the original formulation:

$$\tilde{\theta}_t = \theta_t + \mu v_t$$

$$v_{t+1} = \mu v_t - \varepsilon \nabla L(\theta_t + \mu v_t)$$

$$\theta_{t+1} = \theta_t + v_{t+1}$$
We substitute $\tilde{\theta}_t = \theta_t + \mu v_t$ into the momentum update $v_{t+1} = \mu v_t - \varepsilon \nabla L(\theta_t + \mu v_t)$:

$$v_{t+1} = \mu v_t - \varepsilon \nabla L(\tilde{\theta}_t)$$
We can rewrite $\tilde{\theta}_t = \theta_t + \mu v_t$ as $\theta_t = \tilde{\theta}_t - \mu v_t$. Then, we substitute into the parameter update $\theta_{t+1} = \theta_t + v_{t+1}$. Note that we have to substitute both $\theta_t$ and $\theta_{t+1}$:

$$\tilde{\theta}_{t+1} - \mu v_{t+1} = \tilde{\theta}_t - \mu v_t + v_{t+1}$$

$$\tilde{\theta}_{t+1} = \tilde{\theta}_t - \mu v_t + (1 + \mu) v_{t+1}$$
Now we have arrived at the alternative formulation in terms of $\tilde{\theta}_t$ and $\tilde{\theta}_{t+1}$. Therefore, in the code, it is the new parameter $\tilde{\theta}_{t+1}$ that is returned instead of $\theta_{t+1}$. Again, the idea is that the parameter actually stored and used for evaluation is always the one already updated with the momentum. Therefore, the gradient of the loss is taken at the parameter updated with the momentum.
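As a sanity check, we can run both formulations side by side on a toy quadratic loss and verify numerically that the stored parameter of the alternative form always equals $\theta_t + \mu v_t$ from the original form. This is a sketch under my own naming (not course code), assuming NumPy arrays:

```python
import numpy as np

mu, lr = 0.9, 0.1
grad = lambda x: 2.0 * x          # gradient of the toy loss L(x) = x^2

# Original formulation: track theta and v.
theta, v = np.array([1.0]), np.array([0.0])
# Alternative formulation: track theta_tilde = theta + mu * v.
theta_tilde, v2 = theta + mu * v, v.copy()

for _ in range(20):
    # Original: gradient at the look-ahead point.
    v = mu * v - lr * grad(theta + mu * v)
    theta = theta + v
    # Alternative: gradient at the stored parameter itself.
    v2_prev = v2
    v2 = mu * v2 - lr * grad(theta_tilde)
    theta_tilde = theta_tilde - mu * v2_prev + (1 + mu) * v2
    # The invariant theta_tilde = theta + mu * v holds at every step.
    assert np.allclose(theta_tilde, theta + mu * v)
```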