Optimization algorithms used in ML
A pedagogical implementation of the following optimization algorithms used to train ML models (a minimal sketch of the first and third appears after the list):
- Gradient descent
- Steepest descent
- Gradient descent with momentum
- Nesterov accelerated gradient descent
- AdaGrad
- AdaDelta
- RMSProp
- Adam
- Newton’s method
- Constrained Newton’s method
- Barrier method (an interior-point method)
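
For illustration only, here is a minimal NumPy sketch of plain gradient descent and gradient descent with momentum. The function names, parameters, and defaults below are illustrative assumptions, not the repository's actual API.

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, n_steps=100):
    """Plain gradient descent: step against the gradient with a fixed learning rate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x - lr * grad(x)          # x_{k+1} = x_k - lr * grad f(x_k)
    return x

def gradient_descent_momentum(grad, x0, lr=0.1, beta=0.9, n_steps=100):
    """Gradient descent with momentum: accumulate a velocity that smooths the updates."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(n_steps):
        v = beta * v + grad(x)        # v_{k+1} = beta * v_k + grad f(x_k)
        x = x - lr * v                # x_{k+1} = x_k - lr * v_{k+1}
    return x

# Example: minimize f(x) = ||x||^2, whose gradient is 2x.
print(gradient_descent(lambda x: 2 * x, x0=[3.0, -4.0]))           # approaches [0, 0]
print(gradient_descent_momentum(lambda x: 2 * x, x0=[3.0, -4.0]))  # approaches [0, 0]
```

The adaptive methods in the list (AdaGrad, AdaDelta, RMSProp, Adam) follow the same loop structure but rescale the step per coordinate using running statistics of past gradients.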