CS 189 at UC Berkeley

# Introduction to Machine Learning

Lectures: 2:30-4:00pm, Monday-Thursday, in LeConte 4

## Week 0: Linear Regression, Features, Hyperparameters, and Cross-Validation
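As a taste of the Week 0 material, here is a minimal sketch of k-fold cross-validation used to pick a ridge hyperparameter. All names, the candidate grid, and the synthetic dataset are illustrative, not taken from the course materials:

```python
import numpy as np

def ridge_fit(X, y, lam):
    # Closed-form ridge regression: w = (X^T X + lam I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def cv_mse(X, y, lam, k=5):
    # Average held-out mean squared error over k folds.
    n = len(y)
    idx = np.arange(n)
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        w = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((X[fold] @ w - y[fold]) ** 2))
    return np.mean(errs)

# Illustrative synthetic regression problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=100)

# Choose the hyperparameter with the lowest cross-validated error.
best_lam = min([0.01, 0.1, 1.0, 10.0], key=lambda lam: cv_mse(X, y, lam))
```

The key point the course develops is that the hyperparameter is chosen on held-out folds, never on the training fit itself.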

## Week 1: MLE, MAP, Bias-Variance, Gaussians

- Note 4: MLE and MAP for Regression (Part I)
- Note 5: Bias-Variance Tradeoff
- Note 6: Multivariate Gaussians
- Note 7: MLE and MAP for Regression (Part II)
- Discussion 01 (solution)
- Discussion 02 (solution)
- Discussion 03 (solution)
- Homework 0 (solution)
- Homework 01 (data) (solution)
- Homework 02 (data) (solution) (Code sol)
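A useful identity tying the MLE/MAP notes to Week 0's ridge regression: with Gaussian noise of variance $\sigma^2$ and a zero-mean Gaussian prior of variance $\tau^2$ on the weights, MAP estimation reduces to ridge regression. A compressed derivation (notation mine, not necessarily the notes'):

```latex
\hat{w}_{\text{MAP}}
= \arg\max_w \, p(w \mid X, y)
= \arg\min_w \, \frac{1}{2\sigma^2}\lVert y - Xw \rVert_2^2 + \frac{1}{2\tau^2}\lVert w \rVert_2^2
= (X^\top X + \lambda I)^{-1} X^\top y,
\qquad \lambda = \sigma^2 / \tau^2 .
```

The regularization strength is the ratio of noise variance to prior variance: a tighter prior (smaller $\tau^2$) shrinks the weights harder.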

## Week 2: Kernels, PCA, Optimization
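For the PCA portion of this week, a minimal sketch of principal components computed via the SVD of centered data; the dataset and variable names are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 2-D data with most variance along one axis (illustrative only).
Z = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.3]])
X = Z - Z.mean(axis=0)           # PCA requires centered data

# Principal directions are the right singular vectors of the centered data,
# returned by SVD in order of decreasing singular value.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
components = Vt                  # rows: principal directions
explained_var = S**2 / (len(X) - 1)

X_proj = X @ components[0]       # project onto the top component
```

Working from the SVD of the data, rather than the eigendecomposition of the covariance matrix, is numerically preferable since it avoids forming $X^\top X$.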

## Week 3: Neural Networks
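A bare-bones preview of the forward and backward passes for a one-hidden-layer network with squared loss; the architecture, sizes, and names here are illustrative, not the course's:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

def forward(x, W1, W2):
    # One hidden layer: x -> ReLU(W1 x) -> W2 h (scalar output)
    h = relu(W1 @ x)
    return W2 @ h, h

rng = np.random.default_rng(2)
x = rng.normal(size=3)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(1, 4))
y_hat, h = forward(x, W1, W2)

# Backprop for the squared loss L = 0.5 * (y_hat - y)^2
y = 1.0
delta = y_hat - y                      # dL/dy_hat
grad_W2 = np.outer(delta, h)           # dL/dW2
delta_h = (W2.T @ delta) * (h > 0)     # chain rule through the ReLU
grad_W1 = np.outer(delta_h, x)         # dL/dW1
```

Backpropagation is exactly this repeated chain rule, applied layer by layer from the loss back to the inputs.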

## Week 4: Classification, Logistic Regression, GDA, EM, K-Means

- Note 16: Discriminative vs. Generative Classification, LS-SVM
- Note 17: Logistic Regression
- Note 18: Gaussian Discriminant Analysis
- Note 19: Expectation-Maximization (EM) Algorithm, k-means Clustering
- Discussion 06 (solution)
- Discussion 07 (solution)
- Discussion 08
- Homework 04 (data)
- Homework 05 (data)
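To illustrate the clustering topic from Note 19, a compact sketch of Lloyd's algorithm for k-means; the function name and the two-blob synthetic data are mine, not the course's:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # Lloyd's algorithm: alternate assignment and centroid-update steps.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each center moves to the mean of its points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

# Two well-separated illustrative clusters.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, size=(50, 2)),
               rng.normal(5, 0.3, size=(50, 2))])
centers, labels = kmeans(X, k=2)
```

Each iteration can only decrease the within-cluster sum of squares, which is why Lloyd's algorithm converges, though only to a local optimum that depends on the initialization.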

## Notes

See the Syllabus for more information, including a list of week-by-week topics. Notes are not a substitute for attending lecture, since additional material may be covered there; they are also from a previous iteration of the course and may not be comprehensive. When in doubt, refer to the lectures.

- Note 1: Introduction
- Note 2: Linear Regression
- Note 3: Features, Hyperparameters, Validation
- Note 4: MLE and MAP for Regression (Part I)
- Note 5: Bias-Variance Tradeoff
- Note 6: Multivariate Gaussians
- Note 7: MLE and MAP for Regression (Part II)
- Note 8: Kernels, Kernel Ridge Regression
- Note 9: Total Least Squares
- Note 10: Principal Component Analysis (PCA)
- Note 11: Canonical Correlation Analysis (CCA)
- Note 12: Nonlinear Least Squares, Optimization
- Note 13: Gradient Descent Extensions
- Note 14: Neural Networks
- Note 15: Training Neural Networks
- Note 16: Discriminative vs. Generative Classification, LS-SVM
- Note 17: Logistic Regression
- Note 18: Gaussian Discriminant Analysis
- Note 19: Expectation-Maximization (EM) Algorithm, k-means Clustering
- Note 20: Support Vector Machines (SVM)
- Note 21: Generalization and Stability
- Note 22: Duality
- Note 23: Nearest Neighbor Classification
- Note 24: Sparsity
- Note 25: Decision Trees and Random Forests
- Note 26: Boosting
- Note 27: Convolutional Neural Networks (CNN)


## Discussions

The discussion sections may cover new material and will give you additional practice solving problems. You can attend any discussion section you like. See Syllabus for more information.

- Discussion 0: Vector Calculus, Linear Algebra (solution)
- Discussion 01: Derivatives Review, Least Squares (solution)
- Discussion 02: Ridge Regression (solution)
- Discussion 03: Bias-Variance Tradeoff (solution)
- Discussion 04: Kernel and Multivariate Gaussians (solution)
- Discussion 05: Dimensionality reduction (solution)
- Discussion 06: Midterm Review (solution)
- Discussion 07: Backpropagation (solution)
- Discussion 08: GD/SGD
- Discussion 09: QDA and Logistic Regression
- Discussion 10: Expectation Maximization
- Discussion 11: SVMs/Nearest Neighbors
- Discussion 12: Orthogonal Matching Pursuit
- Discussion 13: Convolutional Neural Networks
- Discussion 14: Clustering


## Homeworks

All homeworks are fully graded. Your lowest homework score will be dropped, but save this drop for emergencies. See the Syllabus for more information.

- Homework 0: Review and Linear Regression (solution)
- Homework 01: Least Squares (data) (solution)
- Homework 02: Ridge Regression (data) (solution) (Code sol)
- Homework 03: PCA and Regression (data) (solution)
- Homework 04: Kernels (data)
- Homework 05: Gradient Descent (data)
- Homework 06: TBA
- Homework 07: TBA
