Chapter 2: Machine Learning Models
Enthusiasm and Frustration: Linear Regression

Exhausted and buried in trial-and-error, Ethan leans on Noah and AIA while Owen doubts the effort, and linear regression becomes Minermont's first measurable win.
This chapter focuses on the core mechanics behind supervised learning with simple models: fitting a line, choosing a loss, and optimising parameters.
What Will You Learn?
Use the interactive activities below to build intuition by manipulating parameters and watching the error change.
2.1 Fit a Linear Regression Model
Manually adjust slope and intercept, then compare predictions vs. data.
2.2 Visualize Gradient Descent
See how iterative updates move parameters toward lower error.
2.5 Comparing Cost Functions
Compare MAE vs. MSE and how each treats outliers.
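The difference between MAE and MSE comes down to how each scores a residual: MAE grows linearly with the error, while MSE squares it, so a single outlier dominates the MSE. A minimal sketch (the residual values are made up for illustration):

```python
import numpy as np

# Residuals from a hypothetical fit; the last point is an outlier.
residuals = np.array([0.5, -0.3, 0.2, -0.4, 8.0])

mae = np.mean(np.abs(residuals))  # each error contributes linearly
mse = np.mean(residuals ** 2)     # squaring amplifies the outlier

print(f"MAE = {mae:.2f}")  # 1.88
print(f"MSE = {mse:.2f}")  # 12.91
```

Note how the one outlier contributes about 85% of the MSE but less than half of the MAE, which is why MSE-based fits are pulled harder toward outliers.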
Mathematical Foundations
- Taking Derivatives: A concise reference of common derivative rules and an interactive widget to compute $f'(x)$ and explore step-by-step explanations via WolframAlpha.
- Partial Derivatives and Gradients: From partial derivatives to the gradient vector, geometric intuition, and a widget to compute $\nabla f$ symbolically.
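A quick way to build intuition for the gradient vector is to approximate each partial derivative with a central difference and compare against the analytic result. A minimal sketch, using a toy function chosen for illustration (`f(x, y) = x² + 3y`, so $\nabla f = (2x, 3)$):

```python
import numpy as np

def f(v):
    x, y = v
    return x**2 + 3*y  # analytic gradient: (2x, 3)

def numerical_gradient(f, v, h=1e-6):
    """Approximate each partial derivative with a central difference."""
    grad = np.zeros_like(v, dtype=float)
    for i in range(len(v)):
        step = np.zeros_like(v, dtype=float)
        step[i] = h
        grad[i] = (f(v + step) - f(v - step)) / (2 * h)
    return grad

v = np.array([2.0, 1.0])
print(numerical_gradient(f, v))  # close to [4., 3.]
```

This check is also a standard debugging tool: if a hand-derived gradient disagrees with the numerical one, the derivation (not the optimizer) is usually at fault.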
Algorithm Pseudocode
- Linear Regression Pseudocode: Detailed pseudocode for linear regression with normal equation and gradient descent approaches.
- Gradient Descent Pseudocode: Step-by-step pseudocode for batch, stochastic, and mini-batch gradient descent algorithms.
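Batch gradient descent for linear regression repeatedly nudges the slope and intercept against the gradient of the MSE, and the normal equation gives the same answer in closed form. A minimal sketch on synthetic data (the true slope 2 and intercept 1, the learning rate, and the iteration count are all illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=50)
y = 2.0 * X + 1.0 + rng.normal(0, 0.5, size=50)  # true slope 2, intercept 1

# Batch gradient descent on the MSE.
w, b = 0.0, 0.0
lr = 0.01
for _ in range(5000):
    error = (w * X + b) - y
    grad_w = 2 * np.mean(error * X)  # ∂MSE/∂w
    grad_b = 2 * np.mean(error)      # ∂MSE/∂b
    w -= lr * grad_w
    b -= lr * grad_b

# Closed-form check: least-squares solution of [X, 1] · theta = y.
A = np.column_stack([X, np.ones_like(X)])
w_ne, b_ne = np.linalg.lstsq(A, y, rcond=None)[0]

print(f"gradient descent: w = {w:.3f}, b = {b:.3f}")
print(f"normal equation:  w = {w_ne:.3f}, b = {b_ne:.3f}")
```

After enough iterations the two approaches agree to several decimal places; stochastic and mini-batch variants only change how many samples feed each gradient estimate.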
Practical Implementation
- Linear Regression from Scratch: Build a linear regression model from scratch, applying the concepts learned to a real problem.
- Linear Regression with Scikit-Learn: Use the industry-standard library to implement a linear regression model efficiently.
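With scikit-learn, the fit/predict cycle replaces the manual update loop entirely. A minimal sketch on synthetic data (the true coefficients are illustrative; note that scikit-learn expects the feature matrix to be 2-D, shape `(n_samples, n_features)`):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(50, 1))  # 2-D feature matrix, one feature
y = 2.0 * X.ravel() + 1.0 + rng.normal(0, 0.5, size=50)

model = LinearRegression()
model.fit(X, y)

print(model.coef_[0], model.intercept_)  # close to the true slope 2 and intercept 1
print(model.score(X, y))                 # R² on the training data
```

`LinearRegression` solves the least-squares problem directly rather than by gradient descent, which is why there is no learning rate or iteration count to tune here.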
Bibliography and Additional Resources
- Linear Regression Resources: Deepen your understanding from theory to practice.
- Gradient Descent Resources: From fundamentals to advanced optimization techniques.