Questions

Let's recap gradient descent by answering the following questions (a short reference sketch of the basic update styles follows the list):

  1. How does SGD differ from vanilla gradient descent?
  2. Explain mini-batch gradient descent.
  3. Why do we need momentum?
  4. What is the motivation behind NAG?
  5. How does Adagrad set the learning rate adaptively?
  6. What is the update rule of Adadelta?
  7. How does RMSProp overcome the limitations of Adagrad?
  8. Define the update equation of Adam.
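For questions 1 and 2, it can help to see the three update styles side by side. The following is a minimal, illustrative NumPy sketch (the function and variable names, such as `grad`, are assumptions for this example, not taken from the chapter): vanilla gradient descent updates once per pass over all samples, SGD updates once per sample, and mini-batch gradient descent updates once per small batch.

```python
import numpy as np

def vanilla_gd_step(theta, X, y, grad, lr=0.01):
    # Vanilla (batch) gradient descent: one update computed over ALL samples.
    return theta - lr * grad(theta, X, y)

def sgd_epoch(theta, X, y, grad, lr=0.01):
    # Stochastic gradient descent: one update per individual (shuffled) sample.
    for i in np.random.permutation(len(X)):
        theta = theta - lr * grad(theta, X[i:i + 1], y[i:i + 1])
    return theta

def minibatch_sgd_epoch(theta, X, y, grad, lr=0.01, batch_size=32):
    # Mini-batch gradient descent: one update per small batch of samples.
    idx = np.random.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        theta = theta - lr * grad(theta, X[batch], y[batch])
    return theta
```

Here `grad(theta, X, y)` stands for any gradient function of the loss with respect to the parameters; the remaining questions ask how momentum, NAG, Adagrad, Adadelta, RMSProp, and Adam modify this basic update.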