Learning Problems for Neural Networks
Get a big-picture sense of what learning problems are all about.
This interactive course dives into the fundamentals of artificial neural networks, from the basic frameworks to more modern techniques like adversarial models.
You’ll answer questions such as how a computer can distinguish between pictures of dogs and cats, and how it can learn to play great chess.
To build an artificial learning algorithm, start with the human brain.
Learn how the human brain inspires the mechanisms within ANNs.
Build a computational model of a neuron and explore why it is so powerful.
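The computational model of a neuron from this chapter can be sketched in a few lines: a weighted sum of inputs plus a bias, squashed by an activation function. The weights, bias, and sigmoid activation below are illustrative choices, not the course's exact formulation:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, passed through a sigmoid activation
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# The output is always strictly between 0 and 1
activation = neuron([1.0, 0.5], [0.4, -0.2], 0.1)
```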
A refresher on vectors, matrices, and optimization.
A quick overview of vectors, which are used to represent inputs and weights in an ANN.
Matrices simplify how algorithms are represented, and in practice can speed up computation.
Derivatives play a key role in optimizing model parameters, such as weights and biases.
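The matrix lesson's central point — that one layer of an ANN is essentially a matrix-vector product — can be sketched as follows; the weight values here are made up for illustration:

```python
def matvec(matrix, vector):
    # Multiply a weight matrix by an input vector:
    # the pre-activation output of one dense layer
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

weights = [[2.0, -0.5],
           [1.0,  0.5]]
layer_output = matvec(weights, [1.0, 2.0])
```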
The building block of many neural networks.
Get a sense of why the perceptron is a linear classifier, and explore its strengths and limitations.
Build up the learning algorithm for the perceptron, and learn how to optimize it.
Dive deeper into the limitations of the perceptron, and explore how to overcome some of them.
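The perceptron learning rule covered in this chapter — update the weights only when the prediction is wrong — can be sketched on the linearly separable AND problem. The learning rate and epoch count below are illustrative, not the course's exact values:

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    # samples: list of ((x1, x2), label) pairs with labels in {0, 1}
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred  # nonzero only on a misclassification
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The AND gate is linearly separable, so the perceptron can learn it
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```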
Stringing it all together.
Learn how to transform data so that it becomes linearly separable.
Behold the power of multilayer perceptrons, applied to a sportswear marketing problem.
How can you measure how complex a model is, and avoid unnecessary complexity?
Neural networks are vulnerable to overfitting. Here’s how to avoid it!
Using a model's outputs to train it to do even better.
Master this powerful tool for optimization problems, such as minimizing loss.
Learn how we can update parameters — even those that are in hidden layers!
To run gradient descent, you’ll need this tool for efficiently computing the gradient of an error function.
If you’re not careful, an activation function can squash or amplify gradients.
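The core loop of the gradient descent lesson fits in a few lines: repeatedly step opposite the gradient to shrink the loss. The one-parameter loss below is a toy example chosen for illustration:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    # Repeatedly step opposite the gradient to reduce the loss
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Loss L(w) = (w - 3)^2 has gradient 2(w - 3), so the minimum is at w = 3
w = gradient_descent(lambda w: 2 * (w - 3), x0=0.0)
```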
Models to capture structural information within data.
These networks excel in image classification problems, even achieving better-than-human performance!
Explore convolutions, padding, and striding — the mathematical nuts and bolts behind feature maps.
Learn how to downsample an image while retaining enough information to recognize complex objects.
A tour of real-world applications, from text-to-speech to artistic style transfer!
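The convolutions, padding, and striding lesson in this chapter boils down to a sliding dot product between a kernel and patches of the image. Here is a minimal "valid" convolution sketch (no padding, stride 1); the edge-detector kernel and image values are made up for illustration:

```python
def convolve2d(image, kernel):
    # 'Valid' convolution: slide the kernel over the image with no padding
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [sum(image[i + di][j + dj] * kernel[di][dj]
             for di in range(kh) for dj in range(kw))
         for j in range(out_w)]
        for i in range(out_h)
    ]

edge = [[1, -1]]  # a tiny horizontal edge detector
feature_map = convolve2d([[0, 0, 1, 1],
                          [0, 0, 1, 1]], edge)
```

The nonzero entries of `feature_map` mark where the pixel values change — exactly the "feature map" idea from the lesson.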
Models to process sequential data by remembering what we already know.
Explore this powerful model for data that isn’t independent, such as words in a sentence.
Learn how to train an RNN using backpropagation through time.
Long short-term memory networks “remember” the past much better than simple RNNs.
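The heart of the simple RNN from this chapter is a single update: the new hidden state mixes the current input with the previous state. The scalar weights below are illustrative placeholders:

```python
import math

def rnn_step(x, h, w_x, w_h, b):
    # New hidden state combines the current input x with the previous state h
    return math.tanh(w_x * x + w_h * h + b)

# Run a scalar RNN over a short sequence, carrying the state forward
h = 0.0
for x in [1.0, 0.5, -1.0]:
    h = rnn_step(x, h, w_x=0.8, w_h=0.5, b=0.0)
```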
A look into stochastic ANNs, adversarial techniques, vectorization, and other advanced topics.
Explore better models for stochastic processes, such as stock prices or the weather.
How can a neural network generate realistic looking fake humans?
This variation on GANs is sometimes even more powerful.
Learn how words can be vectorized to represent their relative relationships.
Reinforcement isn’t just for humans: it’s the training behind AlphaGo and other cutting-edge achievements.
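The word-vectorization lesson above compares words by the angle between their vectors: similar words point in similar directions. The three-dimensional "embeddings" below are invented for illustration; real embeddings are learned and have hundreds of dimensions:

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors: 1 means same direction
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-dimensional word vectors (made-up values for illustration)
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}
royal = cosine_similarity(vectors["king"], vectors["queen"])
fruit = cosine_similarity(vectors["king"], vectors["apple"])
```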