Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization

What Will You Learn?

In the second course of the Deep Learning Specialization, you will uncover the inner workings of deep learning and learn systematic approaches to driving performance. By the end, you will know best practices for setting up train/dev/test sets and analyzing bias/variance, and you will be able to implement standard neural network techniques in TensorFlow, including regularization, hyperparameter tuning, batch normalization, and optimization algorithms such as mini-batch gradient descent, momentum, RMSprop, and Adam. The course equips you with practical skills for building robust deep learning applications.
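
For a sense of what the hands-on work looks like, here is a minimal TensorFlow/Keras sketch (illustrative only; the layer sizes and hyperparameter values are my own, not the course's) combining several of the techniques covered: L2 regularization, dropout, batch normalization, and the Adam optimizer.

    import tensorflow as tf

    # Illustrative sketch: a small classifier combining techniques the
    # course covers (L2 weight regularization, dropout, batch
    # normalization, Adam). Sizes and hyperparameters are arbitrary.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(
            64, activation="relu",
            kernel_regularizer=tf.keras.regularizers.l2(1e-4)),  # L2 penalty on weights
        tf.keras.layers.BatchNormalization(),  # normalize activations in the network
        tf.keras.layers.Dropout(0.5),          # dropout regularization
        tf.keras.layers.Dense(10, activation="softmax"),  # softmax output layer
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # Adam optimization
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )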

About This Course

Provider: Coursera
Format: Online
Duration: Approximately 23 hours to complete
Target Audience: Intermediate
Learning Objectives: By the end, you will know best practices for setting up train/dev/test sets and analyzing bias/variance when building deep learning applications
Course Prerequisites: Intermediate Python skills (basic programming, for loops, if/else statements, data structures) and a basic grasp of linear algebra and machine learning
Assessment and Certification: Earn a certificate from Coursera upon completion
Instructor: Andrew Ng (DeepLearning.AI)
Key Topics: Deep Learning, TensorFlow, Hyperparameter Tuning, Mathematical Optimization
Topics Covered:
  1. Train / Dev / Test sets
  2. Bias / Variance
  3. Basic Recipe for Machine Learning
  4. Regularization
  5. Why Regularization Reduces Overfitting?
  6. Dropout Regularization
  7. Understanding Dropout
  8. Other Regularization Methods
  9. Normalizing Inputs
  10. Vanishing / Exploding Gradients
  11. Weight Initialization for Deep Networks
  12. Numerical Approximation of Gradients
  13. Gradient Checking
  14. Mini-batch Gradient Descent
  15. Understanding Mini-batch Gradient Descent
  16. Exponentially Weighted Averages
  17. Understanding Exponentially Weighted Averages
  18. Bias Correction in Exponentially Weighted Averages
  19. Gradient Descent with Momentum
  20. RMSprop
  21. Adam Optimization Algorithm (see the sketch after this list)
  22. Learning Rate Decay
  23. The Problem of Local Optima
  24. Tuning Process
  25. Using an Appropriate Scale to Pick Hyperparameters
  26. Hyperparameters Tuning in Practice: Pandas vs. Caviar
  27. Normalizing Activations in a Network
  28. Fitting Batch Norm into a Neural Network
  29. Why Does Batch Norm Work?
  30. Batch Norm at Test Time
  31. Softmax Regression
  32. Training a Softmax Classifier
  33. Deep Learning Frameworks
  34. TensorFlow
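
To make the optimization topics concrete, here is a rough NumPy sketch of a single Adam update step as it is commonly written (the function and variable names are my own, not the course's). It combines the exponentially weighted averages behind momentum and RMSprop with bias correction.

    import numpy as np

    # Rough sketch of one Adam update step (beta1, beta2, eps follow
    # common convention; illustrative, not the course's code).
    def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
        m = beta1 * m + (1 - beta1) * grad      # weighted average of gradients (momentum)
        v = beta2 * v + (1 - beta2) * grad**2   # weighted average of squared gradients (RMSprop)
        m_hat = m / (1 - beta1**t)              # bias correction; t counts steps from 1
        v_hat = v / (1 - beta2**t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # parameter update
        return w, m, v

Bias correction matters most early in training, when m and v are still close to their zero initialization and would otherwise underestimate the running averages.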
