CS 189 at UC Berkeley

# Introduction to Machine Learning

Lectures: T/Th 3:30-5 p.m., 155 Dwinelle

## Week 0: Least Squares Framework

## Week 1: Features, Regularization, Hyperparameters, and Cross-Validation

- Note 1: Introduction (Draft)
- Discussion 01 (solution)
- Homework 0 (TeX) (solution)
- Homework 01 (TeX) (data) (solution)

## Week 2: MLE, MAP, OLS, Bias-Variance Tradeoffs

## Week 3: Weighted LS, Total LS, Eigenmethods

## Week 4: TLS, PCA

- Note 6: Multivariate Gaussians
- Note 7: MLE and MAP for Regression (Part II)
- Note 8: Kernels, Kernel Ridge Regression
- Note 9: Total Least Squares
- Note 10: Principal Component Analysis (PCA) (Draft)
- Discussion 04 (solution)
- Homework 03 (TeX) (data) (solution) (solution code)
- Homework 04 (TeX) (data) (solution)
- Homework 05 (TeX) (data) (solution)

## Week 5: CCA, Nonlinear LS, Gradient Descent

## Week 6: Neural Nets, Stochastic Gradient Descent

## Week 7: Regression for Classification: Generative vs. Discriminative

- Note 13: Optimization (Draft)
- Note 14: Neural Networks (Draft)
- Note 15: Training Neural Networks (Draft)
- Discussion 07 (solution)
- Homework 07 (TeX) (data) (solution) (solution code)

## Week 8: Loss Functions, Hinge Loss, SVM

- Note 16: Discriminative vs. Generative Classification, LS-SVM
- Note 17: Logistic Regression
- Note 18: Gaussian Discriminant Analysis
- Note 19: Expectation-Maximization (EM) Algorithm, k-means Clustering
- Discussion 07 (solution)
- Discussion 08 (solution)
- Homework 08 (TeX) (data) (solution) (solution code)

## Week 9: k-Means, EM

## Week 10: Spring Break

## Week 11: Decision Trees, Boosting, Ensemble Methods

## Week 12: Convolutional Neural Nets, Regularization Revisited

## Week 13: Unsupervised Learning: Nearest Neighbors

- Note 26: Boosting (Draft)
- Discussion 11 (solution)
- Discussion 13 (solution)
- Homework 12 (TeX) (data) (solution) (solution code)

## Week 14: Sparsity and Decision Trees

- Note 27: Convolutional Neural Networks (CNN)
- Discussion 13 (solution)
- Discussion 14
- Homework 12 (TeX) (data) (solution) (solution code)
- Homework 13 (TeX) (data) (solution) (solution code)
- Homework 14 (TeX) (data) (solution)

## Notes

See the Syllabus for more information. A week-by-week list of topics appears above.

- Note 1: Introduction (Draft)
- Note 2: Linear Regression
- Note 3: Features, Hyperparameters, Validation
- Note 4: MLE and MAP for Regression (Part I)
- Note 5: Bias-Variance Tradeoff
- Note 6: Multivariate Gaussians
- Note 7: MLE and MAP for Regression (Part II)
- Note 8: Kernels, Kernel Ridge Regression
- Note 9: Total Least Squares
- Note 10: Principal Component Analysis (PCA) (Draft)
- Note 11: Canonical Correlation Analysis (CCA)
- Note 12: Nonlinear Least Squares
- Note 13: Optimization (Draft)
- Note 14: Neural Networks (Draft)
- Note 15: Training Neural Networks (Draft)
- Note 16: Discriminative vs. Generative Classification, LS-SVM
- Note 17: Logistic Regression
- Note 18: Gaussian Discriminant Analysis
- Note 19: Expectation-Maximization (EM) Algorithm, k-means Clustering
- Note 20: Support Vector Machines (SVM)
- Note 21: Generalization and Stability (Draft)
- Note 22: Duality
- Note 23: Nearest Neighbor Classification (Draft)
- Note 24: Sparsity
- Note 25: Decision Trees and Random Forests (Draft)
- Note 26: Boosting (Draft)
- Note 27: Convolutional Neural Networks (CNN)


## Discussions

The discussion sections may cover new material and will give you additional practice solving problems. You may attend any discussion section you like. See the Syllabus for more information.

- Discussion 0: Vector Calculus, Linear Algebra (solution)
- Discussion 01: Derivatives Review, Least Squares (solution)
- Discussion 02: Ridge Regression (solution)
- Discussion 03: Bias-Variance Tradeoff (solution)
- Discussion 04: Kernel and Multivariate Gaussians (solution)
- Discussion 05: Dimensionality reduction (solution)
- Discussion 06: Midterm Review (solution)
- Discussion 07: Backpropagation (solution)
- Discussion 08: GD/SGD (solution)
- Discussion 09: QDA and Logistic Regression (solution)
- Discussion 10: Expectation Maximization (solution)
- Discussion 11: SVMs/Nearest Neighbors (solution)
- Discussion 12: Orthogonal Matching Pursuit (solution)
- Discussion 13: Convolutional Neural Networks (solution)
- Discussion 14: Clustering


## Homeworks

All homeworks are partially graded, and it is highly recommended that you do them. Your lowest homework score will be dropped, but this drop should be reserved for emergencies. Here is the semester's self-grade form (see the form for instructions). See the Syllabus for more information.

- Homework 0: Review and Linear Regression (TeX) (solution)
- Homework 01: Least Squares (TeX) (data) (solution)
- Homework 02: Ridge Regression (TeX) (data) (solution)
- Homework 03: Probabilistic Models (TeX) (data) (solution) (solution code)
- Homework 04: Kernel methods (TeX) (data) (solution)
- Homework 05: Dimensionality reduction (TeX) (data) (solution)
- Homework 06: CCA and Midterm Redo (TeX) (data) (solution)
- Homework 07: Backpropagation (TeX) (data) (solution) (solution code)
- Homework 08: SGD and Classification (TeX) (data) (solution) (solution code)
- Homework 09: LDA, CCA (TeX) (data) (solution) (solution code)
- Homework 10: K Means and EM (TeX) (data) (solution) (solution code)
- Homework 11: SVMs and Neighbors (TeX) (data) (solution)
- Homework 12: Sparsity and Decision Trees (TeX) (data) (solution) (solution code)
- Homework 13: Boosting, Convolutional Neural Networks (TeX) (data) (solution) (solution code)
- Homework 14: K-SVD and Dropout (TeX) (data) (solution)
