Optimizers in Machine Learning and Deep Learning

A deep dive into the math behind popular optimizers in machine learning and deep learning

Optimization is the heart of machine learning, and mastering this crucial subject can set you apart as a top-tier data scientist or machine learning/deep learning engineer. In this comprehensive course, “Optimizers in Machine Learning and Deep Learning,” you will dive deep into the core algorithms that power the training of models, from the basics to the most advanced techniques.

What you’ll learn

  • Understand the math behind popular optimizers – stochastic gradient descent (SGD), Momentum, Nesterov accelerated gradient (NAG), Adagrad, RMSprop, and Adam.
  • Build intuition for each of these optimizers so you can choose the one best suited to a given dataset.
  • Revise TensorFlow basics.
  • Master hyperparameter tuning for each of these optimizers in TensorFlow (see the sketch after this list).
  • Perform optimization calculations by hand and match the results with the outputs generated by TensorFlow optimizer libraries.
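For orientation, the sketch below shows how these optimizers and their main hyperparameters are typically exposed through TensorFlow's tf.keras.optimizers API. It is an illustrative snippet, not course material, and the hyperparameter values are just common defaults rather than recommendations.

```python
# Minimal sketch (assumed, not from the course) of configuring the optimizers
# covered here via tf.keras.optimizers; values shown are illustrative defaults.
import tensorflow as tf

optimizers = {
    "SGD":      tf.keras.optimizers.SGD(learning_rate=0.01),
    "Momentum": tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
    "NAG":      tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9, nesterov=True),
    "Adagrad":  tf.keras.optimizers.Adagrad(learning_rate=0.01),
    "RMSprop":  tf.keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9),
    "Adam":     tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999),
}

# Each of these plugs into model.compile(optimizer=..., loss=...) unchanged,
# so the training loop stays the same while the underlying update rule differs.
```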

Course Content

  • Introduction –> 2 lectures • 2min.
  • Stochastic Gradient Descent –> 5 lectures • 21min.
  • SGD with Momentum –> 4 lectures • 13min.
  • SGD with Nesterov Accelerated Gradient (NAG) –> 4 lectures • 13min.
  • Adagrad –> 4 lectures • 23min.
  • RMSprop –> 4 lectures • 5min.
  • Adam –> 4 lectures • 14min.
  • Gradient derivation for different loss and activation functions –> 7 lectures • 33min.


Description


Whether you’re a beginner looking to understand the foundations or an experienced practitioner aiming to fine-tune your skills, this course offers valuable insights that will elevate your understanding and application of optimization methods. You will learn how optimizers like SGD, momentum, NAG, Adagrad, RMSprop, and Adam work behind the scenes, driving model performance and accuracy.
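To give a flavor of what "behind the scenes" means here, these are the standard textbook update rules for three of the optimizers listed above, written in generic notation that may differ from the exact formulation used in the lectures:

```latex
% Vanilla SGD (learning rate \eta, loss L, parameters \theta):
\theta_{t+1} = \theta_t - \eta \, \nabla_\theta L(\theta_t)

% SGD with momentum (velocity v, momentum coefficient \gamma):
v_{t+1} = \gamma v_t + \eta \, \nabla_\theta L(\theta_t), \qquad
\theta_{t+1} = \theta_t - v_{t+1}

% Adam (moment estimates m, v with bias correction, gradient g_t):
m_t = \beta_1 m_{t-1} + (1-\beta_1) g_t, \qquad
v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2
\hat{m}_t = \frac{m_t}{1-\beta_1^t}, \qquad
\hat{v}_t = \frac{v_t}{1-\beta_2^t}, \qquad
\theta_{t+1} = \theta_t - \eta \, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}
```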

In addition to understanding the concepts behind each of these optimizers, you will perform manual calculations in Excel to derive the gradient formulas, weight updates, and loss values for different loss and activation functions, and compare these results with the outputs generated by TensorFlow.
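As an illustration of that workflow, here is a hedged sketch (in Python rather than Excel, and not taken from the course) of checking a hand-computed SGD weight update against TensorFlow's optimizer output on a single weight:

```python
# Minimal sketch (assumed): compare a hand-computed SGD update with TensorFlow's.
import tensorflow as tf

w = tf.Variable(2.0)             # single weight
lr = 0.1

with tf.GradientTape() as tape:
    loss = (w - 5.0) ** 2        # simple squared-error loss
grad = tape.gradient(loss, w)    # dL/dw = 2*(w - 5) = -6.0

manual_update = w.numpy() - lr * grad.numpy()   # 2.0 - 0.1 * (-6.0) = 2.6

tf.keras.optimizers.SGD(learning_rate=lr).apply_gradients([(grad, w)])
print(manual_update, w.numpy())  # both should print 2.6
```

The same pattern extends to the other optimizers: carry out the update rule by hand, then confirm that apply_gradients produces the same new weight.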

By the end of this course, you will have a solid grasp of how to choose and implement the right optimization techniques for various machine learning and deep learning tasks, giving you the confidence and expertise to tackle real-world challenges with ease.