Chapter 1: ML Basics
This chapter introduces the basic concepts of machine learning. We focus on supervised learning, explain the difference between regression and classification, show how to evaluate and compare machine learning models, and formalize the concept of learning.
-
Chapter 01.00: ML Basics: In a Nutshell
In this nutshell chunk, we give a condensed overview of the foundational principles of machine learning.
-
Chapter 01.01: What is ML?
As a subfield of artificial intelligence, machine learning is a mathematically well-defined discipline that constructs predictive or decision models from data instead of programming them explicitly. In this section, you will see some typical examples of where machine learning is applied and learn about its main directions.
-
Chapter 01.02: Data
In this section we explain the basic structure of tabular data used in machine learning. We differentiate targets from features, talk about labeled and unlabeled data, and introduce the concept of the data-generating process.
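A minimal sketch of this structure in Python, using a made-up housing data set; the column names and values are purely illustrative and not part of the course material:

```python
import pandas as pd

# A tiny, made-up tabular data set: each row is one observation,
# the feature columns describe it, and "price" is the target.
data = pd.DataFrame({
    "area":  [45, 72, 110, 60],      # feature: living area in square meters
    "rooms": [2, 3, 4, 2],           # feature: number of rooms
    "price": [180, 260, 410, 220],   # target: price in 1000 EUR
})

X = data[["area", "rooms"]]  # feature matrix
y = data["price"]            # target vector; data without y would be unlabeled
```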
-
Chapter 01.03: Tasks
The tasks of supervised learning can roughly be divided into two categories: regression (for continuous outcomes) and classification (for categorical outcomes). We will present examples of both.
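As a quick, hypothetical illustration of the two target types (the values below are made up):

```python
# Two supervised tasks could share the same features but differ in the target:
# regression predicts a continuous value, classification a categorical label.
regression_target     = [180.0, 260.0, 410.0, 220.0]            # e.g. house price
classification_target = ["cheap", "mid", "expensive", "cheap"]  # e.g. price category
```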
-
Chapter 01.04: Models and Parameters
We introduce models as functional hypotheses about the mapping from feature space to target space, which allow us to make predictions by computing a function of the input data. In machine learning, models are frequently understood as parameterized families of curves, which we illustrate with several examples.
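A minimal sketch of this idea, assuming a univariate linear model as the parameterized family of curves (the function name linear_model is invented for illustration):

```python
import numpy as np

def linear_model(x, theta):
    """A simple parameterized hypothesis: f(x | theta) = theta_0 + theta_1 * x."""
    return theta[0] + theta[1] * x

# Different parameter values select different curves from the same family.
x = np.linspace(0, 10, 5)
print(linear_model(x, theta=np.array([1.0, 2.0])))   # one concrete model
print(linear_model(x, theta=np.array([0.0, -0.5])))  # another one
```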
-
Chapter 01.05: Learner
Roughly speaking, learners (endowed with a specific hyperparameter configuration) take training data and return a model.
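A toy sketch of this idea in Python; the learner fit_mean_learner and its shrinkage hyperparameter are invented for illustration only:

```python
import numpy as np

def fit_mean_learner(X_train, y_train, shrinkage=0.0):
    """A toy learner: ignores the features and predicts the (shrunken) mean of y.
    'shrinkage' plays the role of a hyperparameter fixed before training."""
    y_hat = (1 - shrinkage) * np.mean(y_train)

    def model(X_new):
        # The returned model is itself a function: new data in, predictions out.
        return np.full(len(X_new), y_hat)

    return model

# Usage: the learner turns training data into a model, which we can then apply.
model = fit_mean_learner(X_train=np.zeros((4, 2)), y_train=np.array([1., 2., 3., 4.]))
print(model(np.zeros((2, 2))))  # -> [2.5 2.5]
```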
-
Chapter 01.06: Losses and Risk Minimization
In order to find good models, we need a way to evaluate and compare them. To this end, we introduce the concepts of loss function, risk, and empirical risk minimization.
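A minimal sketch of the empirical risk under the squared (L2) loss, with made-up data; the chapter develops the formal definitions:

```python
import numpy as np

def l2_loss(y, y_hat):
    """Squared (L2) loss of a single prediction."""
    return (y - y_hat) ** 2

def empirical_risk(theta, X, y, model):
    """Average loss of the model f(. | theta) over the observed data."""
    return np.mean(l2_loss(y, model(X, theta)))

# Empirical risk minimization then means: find theta with minimal empirical risk.
# Toy usage with a linear model on made-up data:
model = lambda X, theta: X @ theta
X = np.array([[1.0, 2.0], [1.0, 3.0]])
y = np.array([5.0, 7.0])
print(empirical_risk(np.array([1.0, 2.0]), X, y, model))  # -> 0.0 (perfect fit)
```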
-
Chapter 01.07: Optimization
In this section, we study parameter optimization as the computational approach to solving machine learning problems. We address pitfalls of non-convex optimization and introduce the fundamental concept of gradient descent.
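A minimal sketch of the gradient descent update, applied to a toy convex risk; the step size and iteration count are arbitrary illustrative choices:

```python
import numpy as np

def gradient_descent(grad, theta0, step_size=0.01, n_steps=100):
    """Plain gradient descent: repeatedly move against the gradient of the risk.
    'grad' returns the gradient of the risk at the current parameters."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_steps):
        theta = theta - step_size * grad(theta)
    return theta

# Toy usage on the convex risk R(theta) = ||theta||^2 with gradient 2 * theta;
# the iterates converge toward the minimizer [0, 0]. For non-convex risks the
# same update may only reach a local optimum.
print(gradient_descent(grad=lambda t: 2 * t, theta0=[3.0, -2.0], n_steps=500))
```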
-
Chapter 01.08: Components of a Learner
Nearly all supervised learning algorithms can be described in terms of three components: 1) hypothesis space, 2) risk, and 3) optimization. In this section, we explain how these components interact and why this decomposition is a very useful way to think about many supervised learning approaches.
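A hypothetical mini-learner that makes the three components explicit, using a linear hypothesis space, squared-loss empirical risk, and gradient descent as the optimizer; all concrete choices and names here are illustrative:

```python
import numpy as np

def f(X, theta):                        # 1) hypothesis space: linear functions
    return X @ theta

def emp_risk(theta, X, y):              # 2) risk: empirical risk under L2 loss
    return np.mean((y - f(X, theta)) ** 2)

def emp_risk_grad(theta, X, y):
    return -2 * X.T @ (y - f(X, theta)) / len(y)

def learn(X, y, step_size=0.1, n_steps=1000):   # 3) optimization: gradient descent
    theta = np.zeros(X.shape[1])
    for _ in range(n_steps):
        theta -= step_size * emp_risk_grad(theta, X, y)
    return theta

X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # column of ones = intercept
y = np.array([2.0, 3.0, 4.0])
print(learn(X, y))  # should approach [1, 1], i.e. y = 1 + 1 * x
```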