Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization
What Will You Learn?
In the second course of the Deep Learning Specialization, you will look inside the black box of deep learning and learn systematic approaches to driving performance. By the end, you'll master key practices for setting up training runs, analyzing bias/variance, and implementing standard neural network techniques in TensorFlow, including regularization, hyperparameter tuning, and optimization algorithms such as gradient descent and Adam, gaining practical skills for building robust deep learning applications.
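As a taste of where the course is headed, here is a minimal TensorFlow/Keras sketch (an illustration, not course material; the layer sizes and hyperparameters are arbitrary) combining three techniques named above: L2 regularization, dropout, and the Adam optimizer.

```python
import tensorflow as tf

# A small fully connected classifier combining techniques the course teaches:
# L2 weight regularization, dropout regularization, and Adam optimization.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(
        128, activation="relu",
        kernel_regularizer=tf.keras.regularizers.l2(1e-4)),  # L2 penalty on weights
    tf.keras.layers.Dropout(0.5),  # randomly zero 50% of units during training
    tf.keras.layers.Dense(10, activation="softmax"),  # softmax output layer
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # Adam optimizer
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```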
About This Course
Provider: Coursera
Format: Online
Duration: Approx. 23 hours to complete
Target Audience: Intermediate
Learning Objectives: By the end, you will know best practices for setting up train/dev/test sets and analyzing bias/variance when building deep learning applications
Course Prerequisites: Intermediate Python skills (basic programming, for loops, if/else statements, data structures); a basic grasp of linear algebra and machine learning
Assessment and Certification: Earn a certificate from Coursera upon completion
Instructor: Andrew Ng (DeepLearning.AI)
Key Topics: Deep Learning, TensorFlow, Hyperparameter Tuning, Mathematical Optimization
Topics Covered:
- Train / Dev / Test sets
- Bias / Variance
- Basic Recipe for Machine Learning
- Regularization
- Why Regularization Reduces Overfitting?
- Dropout Regularization
- Understanding Dropout
- Other Regularization Methods
- Normalizing Inputs
- Vanishing / Exploding Gradients
- Weight Initialization for Deep Networks
- Numerical Approximation of Gradients
- Gradient Checking
- Mini-batch Gradient Descent
- Understanding Mini-batch Gradient Descent
- Exponentially Weighted Averages
- Understanding Exponentially Weighted Averages
- Bias Correction in Exponentially Weighted Averages
- Gradient Descent with Momentum
- RMSprop
- Adam Optimization Algorithm
- Learning Rate Decay
- The Problem of Local Optima
- Tuning Process
- Using an Appropriate Scale to pick Hyperparameters
- Hyperparameters Tuning in Practice: Pandas vs. Caviar
- Normalizing Activations in a Network
- Fitting Batch Norm into a Neural Network
- Why does Batch Norm work?
- Batch Norm at Test Time
- Softmax Regression
- Training a Softmax Classifier
- Deep Learning Frameworks
- TensorFlow
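To give a flavor of the optimization topics in this list, here is a minimal NumPy sketch (an illustration, not course-provided code) of the Adam update, which combines an exponentially weighted average of the gradient (as in momentum) with one of its square (as in RMSprop), plus bias correction:

```python
import numpy as np

def adam_step(theta, grad, m, v, t,
              lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update. theta: parameters, grad: gradient at theta,
    m/v: running first/second moment estimates, t: step count (1-based)."""
    m = beta1 * m + (1 - beta1) * grad       # momentum: EWA of gradients
    v = beta2 * v + (1 - beta2) * grad**2    # RMSprop: EWA of squared gradients
    m_hat = m / (1 - beta1**t)               # bias correction for zero init
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(theta) = theta^2, whose gradient is 2 * theta.
theta = np.array([5.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 1001):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.1)
print(theta)  # approaches 0
```

The bias-correction terms matter mostly in early iterations, when m and v are still close to their zero initialization and would otherwise underestimate the true moving averages.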