Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization


This course teaches you the “magic” of getting deep learning to work well. Rather than treating the deep learning process as a black box, you will understand what drives performance and be able to get good results more systematically. You will also learn TensorFlow.

Students are able to:

  • Understand industry best practices for building deep learning applications.
  • Effectively use common neural network “tricks”, including initialization, L2 and dropout regularization, batch normalization, and gradient checking.
  • Implement and apply a variety of optimization algorithms, such as mini-batch gradient descent, momentum, RMSprop, and Adam, and check their convergence.
  • Understand new best practices for the deep learning era: how to set up train/dev/test sets and analyze bias/variance.
  • Implement a neural network in TensorFlow.
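To give a flavor of the optimization algorithms listed above, here is a minimal NumPy sketch of the Adam update rule (the function name `adam_update` and the toy objective are illustrative choices, not course code):

```python
import numpy as np

def adam_update(w, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step: update exponentially weighted first and second
    moment estimates, apply bias correction, then scale the step."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (momentum)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (RMSprop-style)
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy example: minimize f(w) = w^2, whose gradient is 2w, starting from w = 5.
w, m, v = 5.0, 0.0, 0.0
for t in range(1, 5001):
    grad = 2 * w
    w, m, v = adam_update(w, grad, m, v, t)
# w should now be close to the minimum at 0
```

Checking the loss (or the parameter trajectory) over iterations, as in the last bullet above, is how you verify that such an optimizer is actually converging.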

This is the second course of the Deep Learning Specialization.

Aditya Jyoti Paul
Computer Vision and Image Encryption Researcher

My work makes machines smarter, more secure, and more accessible. I’m passionate about research, teaching, and blogging. Outside academia, I love travel, music, reading, and meeting new people!