Chapter 2: Supervised Regression
This chapter treats the supervised regression task in more detail. We will see different loss functions for regression, how a linear regression model can be used from a Machine Learning perspective, and how to extend it with polynomials for greater flexibility.
-
Chapter 02.00: Supervised Regression: In a Nutshell
In this nutshell chunk, we explore the fundamentals of supervised regression, where we teach machines to predict continuous outcomes based on input data.
-
Chapter 02.01: Linear Models with L2 Loss
In this section, we focus on the general concept of linear regression and explain how the linear regression model can be used from a machine learning perspective to predict a continuous numerical target variable. Furthermore, we introduce the \(L2\) loss in the context of linear regression and explain how minimizing it yields the model with the smallest sum of squared errors (SSE).
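The idea can be sketched in a few lines of NumPy: with \(L2\) loss, the empirical risk is the SSE, and the minimizer has the closed form \(\hat{\theta} = (X^\top X)^{-1} X^\top y\). The data below are hypothetical toy values chosen only for illustration.

```python
import numpy as np

# Hypothetical toy data: one feature, continuous target
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.1, 3.9, 6.2, 7.8])

# Add an intercept column to the design matrix
Xb = np.hstack([np.ones((X.shape[0], 1)), X])

# Minimizing the L2 loss (sum of squared errors) leads to the
# normal equations X^T X theta = X^T y
theta = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)

pred = Xb @ theta
sse = np.sum((y - pred) ** 2)  # the SSE this model minimizes
```

Any other choice of `theta` would produce a larger SSE on this data; that is what "SSE-minimal model" means here.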
-
Chapter 02.02: Proof OLS Regression: Deep Dive
In this section, we provide a proof of the ordinary least squares (OLS) method, i.e., a derivation of the analytical minimizer of the \(L2\) loss.
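The core step of the derivation can be sketched as follows, writing \(\mathbf{X}\) for the design matrix (including an intercept column) and \(\mathbf{y}\) for the target vector:

\[
\hat{\boldsymbol{\theta}} = \arg\min_{\boldsymbol{\theta}} \|\mathbf{y} - \mathbf{X}\boldsymbol{\theta}\|_2^2 .
\]

Setting the gradient with respect to \(\boldsymbol{\theta}\) to zero gives

\[
\nabla_{\boldsymbol{\theta}} \|\mathbf{y} - \mathbf{X}\boldsymbol{\theta}\|_2^2
= -2\,\mathbf{X}^\top(\mathbf{y} - \mathbf{X}\boldsymbol{\theta}) \overset{!}{=} \mathbf{0}
\quad\Longrightarrow\quad
\hat{\boldsymbol{\theta}} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y},
\]

assuming \(\mathbf{X}^\top\mathbf{X}\) is invertible, i.e., \(\mathbf{X}\) has full column rank.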
-
Chapter 02.03: Linear Models with L1 Loss
In this section, we introduce the \(L1\) loss and discuss how it differs from the \(L2\) loss. In addition, we explain how the choice of loss affects optimization and robustness.
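The robustness difference is easiest to see for a constant model \(f(x) = c\): the \(L2\) loss is minimized by the mean of the targets, the \(L1\) loss by their median. The toy values below (with one deliberate outlier) are hypothetical:

```python
import numpy as np

# Hypothetical targets; the last value is an outlier
y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])

# For a constant model f(x) = c:
c_l2 = y.mean()      # argmin_c sum_i (y_i - c)^2  -> mean
c_l1 = np.median(y)  # argmin_c sum_i |y_i - c|    -> median
```

The outlier drags the \(L2\)-optimal constant far away from the bulk of the data, while the \(L1\)-optimal constant barely moves; this is the sense in which \(L1\) is more robust. On the optimization side, note that \(|r|\) is not differentiable at \(r = 0\), so no closed-form solution analogous to OLS exists.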
-
Chapter 02.04: Polynomial Regression Models
This section introduces polynomials to obtain more flexible models for the regression task. We explain the connection to the basic linear model and discuss the problem of overfitting.
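A short sketch of both points made above: polynomial regression is still a linear model, since the prediction is linear in the coefficients of the basis \(1, x, x^2, \dots, x^d\), and increasing the degree \(d\) drives the training error toward zero, which is the overfitting risk. The synthetic data below is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

def train_sse(x, y, d):
    # Polynomial regression = linear regression on the features
    # 1, x, ..., x^d, fitted by least squares
    coefs = np.polyfit(x, y, d)
    resid = y - np.polyval(coefs, x)
    return np.sum(resid ** 2)

sse_deg1 = train_sse(x, y, 1)  # underfits the sine shape
sse_deg9 = train_sse(x, y, 9)  # 10 coefficients, 10 points:
                               # interpolates the noisy training data
```

The degree-9 fit achieves near-zero training SSE by chasing the noise, which is exactly the overfitting problem; its low training error says nothing about error on new data.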