Topic 2: Optimization Part-1
This topic covers computational graphs, the basic training of neural networks, and backpropagation.
-
Chapter 02.01: Loss Functions for Regression
Empirical risk minimization, gradient descent, and stochastic gradient descent are the three subtopics we explain in this chapter.
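The three subtopics above can be sketched together on a toy problem: the empirical risk is the mean squared error over a small dataset, and it is minimized either with full-batch gradient descent or with per-example (stochastic) updates. The data, learning rate, and iteration counts below are made-up illustration values, not anything prescribed by the course.

```python
# Toy 1-D linear regression: hypothetical data lying on y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

def mse_grad(w, b):
    """Gradient of the empirical risk (mean squared error) w.r.t. w and b."""
    n = len(xs)
    gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    return gw, gb

def gradient_descent(steps=2000, lr=0.05):
    """Full-batch gradient descent: one update per pass over ALL data."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw, gb = mse_grad(w, b)
        w -= lr * gw
        b -= lr * gb
    return w, b

def sgd(epochs=500, lr=0.05):
    """Stochastic gradient descent: one update per single example
    (swept in a fixed order here; in practice the data is shuffled)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = w * x + b - y
            w -= lr * 2 * err * x
            b -= lr * 2 * err
    return w, b

w, b = gradient_descent()
w2, b2 = sgd()
```

Both variants recover parameters near (2, 1) on this noise-free data; they differ in how often the parameters are updated per pass over the dataset.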
-
Chapter 02.02: Chain Rule and Computational Graphs
In this section, we explain the chain rule of calculus and its representation via computational graphs.
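A minimal sketch of the idea: decompose a function into a two-node computational graph, cache the intermediate value in the forward pass, and multiply local derivatives in the backward pass. The function sin(x²) is an arbitrary illustration, not one taken from the lecture.

```python
import math

# Computational graph for y = sin(x**2), split into two nodes.
def forward(x):
    u = x * x           # node 1: square
    y = math.sin(u)     # node 2: sine
    return u, y

def backward(x, u):
    dy_du = math.cos(u)     # local derivative of node 2
    du_dx = 2 * x           # local derivative of node 1
    return dy_du * du_dx    # chain rule: dy/dx = (dy/du) * (du/dx)

x = 1.5
u, y = forward(x)
grad = backward(x, u)
```

The gradient matches a central finite-difference estimate of dy/dx, which is the standard sanity check for hand-derived chain-rule computations.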
-
Chapter 02.03: Basic Backpropagation 1
This section introduces the forward and backward passes, the chain rule, and the details of backpropagation in deep learning.
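To make the forward/backward split concrete, here is a one-neuron "network" with a sigmoid activation and squared-error loss: the forward pass caches the values the backward pass needs, and the backward pass walks the chain rule back to the parameter gradients. All concrete numbers are illustrative assumptions.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w, b):
    """Forward pass: compute and cache pre-activation z and activation a."""
    z = w * x + b
    a = sigmoid(z)
    return z, a

def backward(x, z, a, target):
    """Backward pass for the loss L = (a - target)**2."""
    dL_da = 2 * (a - target)
    da_dz = a * (1 - a)          # sigmoid' expressed via its own output
    dL_dz = dL_da * da_dz
    return dL_dz * x, dL_dz      # gradients w.r.t. w and b

# Illustrative values (not from the lecture).
x, w, b, target = 0.5, 0.3, -0.1, 1.0
z, a = forward(x, w, b)
grad_w, grad_b = backward(x, z, a, target)
```

Caching z and a in the forward pass is exactly what deep learning frameworks do internally: the backward pass reuses them instead of recomputing.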
-
Chapter 02.04: Basic Backpropagation 2
We continue our discussion of backpropagation, formalizing it and expressing it as a recursion over the layers.
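The recursive structure can be sketched on a stack of scalar layers a_l = tanh(w_l * a_{l-1}): an error signal delta is initialized at the output and propagated layer by layer toward the input, producing one weight gradient per layer. The weights and input are arbitrary illustration values.

```python
import math

def forward(ws, x):
    """Forward pass through scalar layers a_l = tanh(w_l * a_{l-1})."""
    acts = [x]
    for w in ws:
        acts.append(math.tanh(w * acts[-1]))
    return acts

def backward(ws, acts):
    """Backprop as a recursion: delta carries dL/d(activation of layer l)."""
    grads = [0.0] * len(ws)
    delta = 1.0                        # dL/da_L for the loss L(a) = a
    for l in range(len(ws) - 1, -1, -1):
        a = acts[l + 1]
        dz = delta * (1 - a * a)       # tanh'(z) = 1 - tanh(z)**2
        grads[l] = dz * acts[l]        # dL/dw_l
        delta = dz * ws[l]             # recurse to the previous layer
    return grads

# Illustrative three-layer stack (not from the lecture).
ws = [0.5, -1.2, 0.8]
acts = forward(ws, 0.7)
grads = backward(ws, acts)
```

Note that one backward sweep yields the gradients for all layers at once, which is why backpropagation is far cheaper than differentiating each weight independently.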
-
Chapter 02.05: Hardware and Software
This section introduces GPU training for accelerated learning of neural networks, the software that supports this hardware, and deep learning software platforms.