Topic 1: Introduction
In this topic, we give a brief introduction to representation learning, the single neuron, the XOR problem, and single-hidden-layer as well as multi-layer neural networks. Moreover, we discuss multiclass classification, matrix notation, and universal approximation.
-
Chapter 01.01: Introduction
In this section, we introduce the relationship between deep learning (DL) and machine learning (ML), give a basic introduction to feature learning, and discuss use cases and data types for DL methods.
-
Chapter 01.02: Single Neuron
In this section, we explain the graphical representation of a single neuron and describe affine transformations and non-linear activation functions. Moreover, we discuss the hypothesis space of a single neuron and name some typical loss functions.
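As a minimal sketch of the idea, a single neuron is just an affine transformation followed by a non-linear activation; the weights, bias, and sigmoid choice below are illustrative, not from the lecture:

```python
import math

def neuron(x, w, b):
    """A single neuron: affine transformation w^T x + b,
    followed by a sigmoid activation (one typical choice)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b  # affine part
    return 1.0 / (1.0 + math.exp(-z))             # non-linear activation

p = neuron(x=[1.0, 2.0], w=[0.5, -0.25], b=0.0)  # output in (0, 1)
```

With a sigmoid output and the cross-entropy loss, such a neuron coincides with logistic regression, which is why it defines a linear-decision-boundary hypothesis space.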
-
Chapter 01.03: XOR Problem
An example of a problem that a single neuron cannot solve but a single-hidden-layer network can!
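A quick sketch of why a hidden layer suffices: with hand-set weights (chosen here for illustration; a trained network would find equivalent ones), two hidden units acting like OR and AND can be combined into XOR, which no single linear-threshold neuron can represent:

```python
def step(z):
    """Threshold activation: 1 if the input is positive, else 0."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    """One hidden layer with hand-set weights solves XOR."""
    h1 = step(x1 + x2 - 0.5)     # hidden unit behaving like OR
    h2 = step(x1 + x2 - 1.5)     # hidden unit behaving like AND
    return step(h1 - h2 - 0.5)   # OR and not AND = XOR

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 0, 1]
```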
-
Chapter 01.04: Single Hidden Layer NN
We introduce the architecture of single-hidden-layer neural networks and discuss the advantages of hidden layers. Then, we explain typical (non-linear) activation functions.
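Three activation functions that typically appear in this discussion can be written in a few lines (a minimal sketch; the exact set covered may differ):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))  # squashes to (0, 1)

def tanh(z):
    return math.tanh(z)                # squashes to (-1, 1), zero-centered

def relu(z):
    return max(0.0, z)                 # identity for positive z, zero otherwise
```

The key point is that all of them are non-linear: without a non-linearity, stacking layers would collapse into a single affine map.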
-
Chapter 01.05: Single Hidden Layer Networks for Multi-Class Classification
In this section, we discuss neural network architectures for multi-class classification, the softmax activation function, and the softmax loss.
-
Chapter 01.06: MLP: Multi-Layer Feedforward Neural Networks
Architectures of deep neural networks and the view of deep neural networks as chained functions are the learning goals of this part.
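The "chained functions" view can be sketched directly: the network computes f(x) = f_L(... f_2(f_1(x)) ...), each layer being an affine map followed by an activation. The scalar layers and parameters below are illustrative only:

```python
import math

def layer(x, w, b):
    """A toy scalar 'layer': affine map plus tanh activation."""
    return math.tanh(w * x + b)

def deep_net(x, params):
    """Chain the layers: the output of each becomes the input of the next."""
    for w, b in params:
        x = layer(x, w, b)
    return x

out = deep_net(0.5, [(1.0, 0.0), (2.0, -1.0), (0.5, 0.3)])
```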
-
Chapter 01.07: Optimization
In this section, we learn about a compact representation of neural networks: vector notation for neuron layers, and vector and matrix notation for the bias and weight parameters.
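In this notation, a whole layer is one matrix-vector product, z = W x + b, followed by an elementwise activation. A minimal sketch using plain lists (the shapes and weights are illustrative):

```python
import math

def matvec(W, x):
    """Matrix-vector product with W given as a list of rows."""
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

def layer(x, W, b, act=math.tanh):
    """One layer in matrix notation: z = W x + b, then elementwise activation."""
    z = [zi + bi for zi, bi in zip(matvec(W, x), b)]
    return [act(zi) for zi in z]

# Hidden layer maps 2 inputs to 3 units; output layer maps 3 units to 1.
W1 = [[0.1, 0.2], [0.3, -0.1], [0.0, 0.5]]
b1 = [0.0, 0.1, -0.1]
W2 = [[0.2, -0.3, 0.4]]
b2 = [0.05]

h = layer([1.0, -1.0], W1, b1)   # hidden activations, length 3
y = layer(h, W2, b2)             # network output, length 1
```

The payoff of the matrix form is that an entire layer, rather than each neuron, becomes the basic unit of computation.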
-
Chapter 01.08: Universal Approximation
The universal approximation theorem for one-hidden-layer neural networks, and the pros and cons of a low approximation error, are the learning goals of this section.
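The theorem itself does not fit in a code snippet, but a tiny example conveys the flavor: with hand-set weights, a one-hidden-layer ReLU network represents |x| exactly, and sums of such piecewise-linear units can approximate any continuous function on a compact interval:

```python
def relu(z):
    return max(0.0, z)

def abs_net(x):
    """One hidden layer, two ReLU units, hand-set weights:
    |x| = ReLU(x) + ReLU(-x)."""
    h1 = relu(1.0 * x)     # hidden unit 1, weight +1
    h2 = relu(-1.0 * x)    # hidden unit 2, weight -1
    return h1 + h2         # output layer with weights (1, 1)

print([abs_net(x) for x in (-2.0, 0.0, 3.0)])  # [2.0, 0.0, 3.0]
```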
-
Chapter 01.09: Brief History
We give an overview of the history of DL development.