Gradient Descent, the Delta Rule and Backpropagation

If you learn about machine learning, you will stumble over three related terms:

  • gradient descent,
  • the delta rule and
  • backpropagation

Gradient descent is an iterative method for finding a (local) minimum of a function in a high-dimensional space: starting from some point, you repeatedly take a small step in the direction of steepest descent, i.e. along the negative gradient.
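
For example, here is a minimal sketch in Python (the function f(x) = x^2 + 2x, the learning rate and the starting point are arbitrary choices for illustration):

    def f_prime(x):
        """Derivative of f(x) = x**2 + 2*x."""
        return 2 * x + 2

    x = 5.0    # arbitrary starting point
    eta = 0.1  # learning rate (step size)
    for _ in range(100):
        x -= eta * f_prime(x)  # step along the negative gradient
    print(x)   # converges towards -1, the minimum of f

The learning rate eta controls the step size: if it is too large, the iterates overshoot the minimum; if it is too small, convergence is slow.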

The delta rule is an update rule for the weights of single-layer perceptrons. It is gradient descent applied to the perceptron's error: each weight is adjusted in proportion to its input and the difference between the desired and the actual output.
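
For a neuron with linear activation the delta rule reads Δw_i = η · (t - o) · x_i, where t is the desired output, o the actual output and x_i the i-th input. A minimal sketch for a single linear neuron in Python (the training data is made up for illustration and follows the hypothetical target t = 2·x_1 - 3·x_2):

    # Toy data consistent with the hypothetical target t = 2*x1 - 3*x2
    data = [([1.0, 0.0], 2.0), ([0.0, 1.0], -3.0), ([1.0, 1.0], -1.0)]

    w = [0.0, 0.0]  # weights of a single linear neuron
    eta = 0.1       # learning rate

    for _ in range(200):
        for x, t in data:
            o = sum(wi * xi for wi, xi in zip(w, x))  # actual output
            for i in range(len(w)):
                w[i] += eta * (t - o) * x[i]          # delta rule update

    print(w)  # approaches [2.0, -3.0]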

Backpropagation is an efficient way of computing the gradient that gradient descent needs in multi-layer networks. Applying the chain rule yields an update rule with recursively defined parts: the error terms of the neurons in each layer are computed from the error terms of the layer after it, so the computation runs from the output layer (the last layer) backwards to the first hidden layer.
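
A minimal sketch in Python for a tiny network with one hidden layer, trained on the XOR toy problem (the architecture, learning rate, number of epochs and random seed are arbitrary choices for illustration; convergence depends on the random initialization):

    import math
    import random

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    random.seed(0)

    # XOR: 2 inputs, 2 hidden neurons, 1 output; last weight is the bias
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
    w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
    w_out = [random.uniform(-1, 1) for _ in range(3)]
    eta = 0.5

    for _ in range(10000):
        for x, t in data:
            # Forward pass
            xb = x + [1]  # input plus bias
            h = [sigmoid(sum(wi * xi for wi, xi in zip(wh, xb)))
                 for wh in w_hidden]
            hb = h + [1]  # hidden activations plus bias
            o = sigmoid(sum(wi * hi for wi, hi in zip(w_out, hb)))

            # Backward pass: error terms, from the output layer backwards
            delta_o = (o - t) * o * (1 - o)
            delta_h = [delta_o * w_out[j] * h[j] * (1 - h[j])
                       for j in range(2)]

            # Gradient descent weight updates
            for j in range(3):
                w_out[j] -= eta * delta_o * hb[j]
            for j in range(2):
                for i in range(3):
                    w_hidden[j][i] -= eta * delta_h[j] * xb[i]

    for x, t in data:
        xb = x + [1]
        h = [sigmoid(sum(wi * xi for wi, xi in zip(wh, xb)))
             for wh in w_hidden]
        o = sigmoid(sum(wi * hi for wi, hi in zip(w_out, h + [1])))
        print(x, t, round(o, 2))

The delta_h terms are exactly the recursively defined parts: the error term of each hidden neuron is computed from the error term of the layer after it.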

See also

Wikipedia pages:

  • Gradient descent
  • Delta rule
  • Backpropagation

Published

Oct 26, 2014
by Martin Thoma

Category

Machine Learning

Tags

  • AI
  • Machine Learning
