Introduction to Linear Regression
The basics of prediction with a very simple model: a line.
Machine learning swoops in where humans fail - such as when there are hundreds (or hundreds of thousands) of variables to keep track of and millions (or billions, or trillions) of pieces of data to process.
This course develops the mathematical basis needed to deeply understand how problems of classification and estimation work. By the end, you'll have the techniques to analyze data and apply them to real-world problems.
Get your basics in line.
The basics of prediction with a very simple model: a line.
Dive into the math behind linear regression.
Brush up on linear algebra, a key tool throughout machine learning.
What happens when you need to do a regression with more than two variables? Hyperplanes!
When variables are related non-linearly, linear regression falls short.
Get familiar with ridge regression, lasso, nearest neighbors, and other approaches.
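To preview where this chapter is headed, here is a minimal sketch of simple linear regression: fitting a line y = ax + b by ordinary least squares, using the closed-form slope and intercept formulas. The function name and data are illustrative, not part of the course.

```python
# Ordinary least squares for one variable: fit y = a*x + b by
# minimizing the sum of squared residuals (closed-form solution).
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Points that lie exactly on y = 2x + 1 should be recovered exactly.
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

With more than one variable, the same idea generalizes from a line to a hyperplane, which is where the later lessons pick up.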
Classifying both quantitative and qualitative data.
Add this clever relationship representation to your tool kit.
Instead of giving a definitive 'yes' or 'no', this method predicts probabilities of 'yes' or 'no'.
Explore this powerful tool for separating classes of normally distributed data.
"My neighbors are my friends", as a classification algorithm.
The judge and jury for classification.
Bayes' theorem - a classic tool of probability - guides this classification method.
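As a taste of the "my neighbors are my friends" idea above, here is a minimal k-nearest-neighbors sketch: classify a new point by majority vote among its k closest training points. The distance measure, data, and names are illustrative.

```python
from collections import Counter

# k-nearest-neighbors: classify a point by a majority vote among the
# k closest training points (squared Euclidean distance in 2D).
def knn_predict(train, point, k=3):
    # train is a list of ((x, y), label) pairs
    def sq_dist(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    nearest = sorted(train, key=lambda item: sq_dist(item[0], point))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((0, 0), "blue"), ((0, 1), "blue"), ((1, 0), "blue"),
         ((5, 5), "red"), ((5, 6), "red"), ((6, 5), "red")]
label = knn_predict(train, (0.5, 0.5))  # surrounded by blue points
```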
Explore this versatile model and related ideas like bagging, random forests, and boosting.
A versatile tool, best applied when there are strong distinctions between cases.
The basics of classification via a tree.
A major advantage of trees is their interpretability. What are the drawbacks?
Reduce the model variance by averaging across many trees!
"Teammates who complement each other's weaknesses", trees edition.
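The "average across many trees" idea above can be sketched with the simplest possible tree, a one-threshold decision stump: train each stump on a bootstrap sample and let the ensemble vote. The data and function names are illustrative, and real random forests add feature subsampling on top of this.

```python
import random

# Bagging with decision stumps: each stump splits on one threshold,
# is trained on a bootstrap sample, and the ensemble majority-votes.
def fit_stump(data):
    # data: list of (x, label) with labels in {0, 1}; pick the threshold
    # t that minimizes training errors for the rule "predict 1 if x > t".
    best = None
    for t in [x for x, _ in data]:
        errors = sum((x > t) != bool(y) for x, y in data)
        if best is None or errors < best[1]:
            best = (t, errors)
    return best[0]

def bagged_predict(data, x, n_stumps=25, seed=0):
    rng = random.Random(seed)
    votes = 0
    for _ in range(n_stumps):
        sample = [rng.choice(data) for _ in data]  # bootstrap sample
        threshold = fit_stump(sample)
        votes += x > threshold
    return int(votes > n_stumps / 2)  # majority vote

data = [(1, 0), (2, 0), (3, 0), (6, 1), (7, 1), (8, 1)]
```

Any single stump can be thrown off by an unlucky bootstrap sample; the majority vote washes those mistakes out, which is exactly the variance reduction the lesson describes.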
Divide classes with the best possible margin of error.
The wall of SVMs: you're either in or you're out.
Explore this SVM that works even when some points end up on the "wrong side of the wall".
Sometimes, the best wall isn't a straight line.
Learn how to combine several classifiers to handle data sets with many classes.
SVMs are similar to logistic regression - but not exactly the same! Find out why.
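One way to see the SVM/logistic-regression contrast in the last lesson is through their loss functions. Both penalize a point according to its margin m = y·f(x) (with y in {-1, +1}), but the hinge loss goes exactly to zero past the margin while the logistic loss never does. A minimal sketch:

```python
import math

# SVM hinge loss: zero once the margin exceeds 1, so confidently
# correct points have no influence on the fit.
def hinge_loss(m):
    return max(0.0, 1.0 - m)

# Logistic loss: positive for every finite margin, so every point
# keeps a little influence on the fit.
def logistic_loss(m):
    return math.log(1.0 + math.exp(-m))
```

For a confidently correct point with margin 5, the hinge loss ignores it entirely while the logistic loss still pays a small penalty; that difference is why only some points (the support vectors) determine an SVM's boundary.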
It's time to upgrade the dot product.
Get down the basics of this tool, which measures the similarity of vectors.
Use kernels to classify new data by comparing it to existing data.
See why SVMs are one of the best models for employing kernels.
Explore the power of the kernel trick, and the drawbacks and pitfalls of using kernels.
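The kernel trick in these lessons can be verified directly: a degree-2 polynomial kernel computes a dot product in a higher-dimensional feature space without ever constructing that space. The function names below are illustrative.

```python
import math

# Kernel trick: k(x, y) = (x . y)^2 equals a dot product of explicit
# degree-2 feature maps, computed without leaving 2 dimensions.
def poly_kernel(x, y):
    return sum(a * b for a, b in zip(x, y)) ** 2

def feature_map(x):
    # Explicit feature space for this kernel in 2D:
    # phi(x) = (x1^2, x2^2, sqrt(2)*x1*x2)
    return [x[0] ** 2, x[1] ** 2, math.sqrt(2) * x[0] * x[1]]

x, y = [1.0, 2.0], [3.0, 4.0]
implicit = poly_kernel(x, y)   # stays in 2 dimensions
explicit = sum(a * b for a, b in zip(feature_map(x), feature_map(y)))
```

The two numbers agree, which is the whole trick: an SVM only ever needs dot products, so swapping in a kernel upgrades it to a richer feature space for free.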