Artificial Neural Networks

Computational Models of The Neuron

A neuron has many inputs but only one output, so it must "integrate" its inputs into one output (a single number). Recall that the inputs to a neuron are generally outputs from other neurons. What is the most natural way to represent the set of these inputs to a single neuron in an ANN?


Computational Models of The Neuron

In our computational model of a neuron, the inputs defined by the vector \vec{x} are “integrated” by taking the bias b plus the dot product of the inputs \vec{x} and weights \vec{w}:

\vec{w} \cdot \vec{x} + b.

The dot product represents a "weighted sum" because it multiplies each input by its corresponding weight.

A biological interpretation is that the inputs defining \vec{x} are the outputs of other neurons, the weights defining \vec{w} are the strengths of the connections to those neurons, and the bias b impacts the threshold the computing neuron must surpass in order to fire.
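As a minimal sketch (with invented inputs, weights, and bias, not values from the course), the integration step is just a dot product plus a bias:

```python
# Integration step of a single model neuron: weighted sum plus bias.
# All numbers here are made-up illustrative values.
x = [0.5, -1.0, 2.0]   # inputs: outputs of other neurons
w = [1.5, 2.0, 0.25]   # weights: strengths of those connections
b = 2.0                # bias

# Dot product of w and x, plus the bias b.
v = sum(wi * xi for wi, xi in zip(w, x)) + b
print(v)  # 0.75 + (-2.0) + 0.5 + 2.0 = 1.25
```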


Computational Models of The Neuron

Given the inputs, weights, and bias shown above, what is the integration of these inputs, given by the weighted sum \vec{w} \cdot \vec{x} + b?

Note: If you are unfamiliar with dot products, our wiki on the dot product in Cartesian coordinates might be helpful.


Computational Models of The Neuron

An activation function, H(v), is used to transform the integration (weighted sum) into a single output which determines whether or not the neuron would fire. For example, we might have H(v) as the Heaviside step function; that is,

H(v) = \begin{cases} 1 & \mbox{if } v \ge 0, \\ 0 & \mbox{if } v < 0. \end{cases}

Considering H(\vec{w} \cdot \vec{x} + b), how does increasing the bias b affect the likelihood of the neuron firing (all else equal), assuming that a 1 corresponds to firing?
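To see the effect of the bias concretely, here is a small sketch (again with invented numbers): raising b raises v = \vec{w} \cdot \vec{x} + b, so inputs that previously fell below the threshold now make the neuron fire.

```python
# A single neuron with a Heaviside step activation.
def heaviside(v):
    return 1 if v >= 0 else 0

def neuron_fires(x, w, b):
    v = sum(wi * xi for wi, xi in zip(w, x)) + b  # integration
    return heaviside(v)                           # activation

x = [0.5, -1.0, 2.0]   # made-up inputs
w = [1.5, 2.0, 0.25]   # made-up weights; w . x = -0.75

print(neuron_fires(x, w, b=0.0))  # 0: v = -0.75 < 0, no firing
print(neuron_fires(x, w, b=1.0))  # 1: v =  0.25 >= 0, fires
```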


Computational Models of The Neuron

When H(v) is the Heaviside step function, the neuron modeled by H(\vec{w} \cdot \vec{x} + b) fires when \vec{w} \cdot \vec{x} + b \ge 0.

The hypersurface \vec{w} \cdot \vec{x} + b = 0 is called the decision boundary, since it divides the input vector space into two parts based on whether the input would cause the neuron to fire. This model is known as a linear classifier because this boundary is based on a linear combination of the inputs.
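As a sketch (with a weight vector and bias invented for illustration), classifying an input amounts to checking which side of the hyperplane \vec{w} \cdot \vec{x} + b = 0 it falls on:

```python
# Linear classifier: the sign of w . x + b tells us which side of the
# decision boundary a point lies on. Weights and bias are invented.
def classify(x, w, b):
    v = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if v >= 0 else 0  # 1 = neuron fires, 0 = it does not

w, b = [2.0, -1.0], -1.0  # decision boundary: 2*x1 - x2 - 1 = 0

print(classify([2.0, 1.0], w, b))  # 1: 2*2 - 1 - 1 =  2 >= 0
print(classify([0.0, 1.0], w, b))  # 0: 2*0 - 1 - 1 = -2 <  0
```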


Computational Models of The Neuron

The model above shows a decision boundary for predicting college admission based on the input \vec{x} = \begin{pmatrix} \text{SAT score} \\ \text{GPA} \end{pmatrix} and the activation function H(\vec{w} \cdot \vec{x} + b), where H(v) is the Heaviside step function. Which of the following is a possible value for the weight vector \vec{w}?


Computational Models of The Neuron

So far, we’ve considered an activation function H(v) with binary outputs, as inspired by a physical neuron. However, in ANNs, we don’t need to restrict ourselves to a binary function. Functions like the ones below avoid counterintuitive jumps and can model continuous values (e.g., a probability).

The power of ANNs is illustrated by the universal approximation theorem, which states that ANNs using activation functions like these can model any continuous function, given some general requirements about the size and layout of the ANN.

We can't prove the universal approximation theorem here, but its implications are still important. No matter how complicated a situation is, a sufficiently large ANN with the appropriate parameters can model it.


Computational Models of The Neuron

Consider the activation function H(v) = \dfrac{1}{1+e^{-v}}, where e is Euler's number, 2.71828\ldots

H(v) is known as the sigmoid function. In our image above, we multiply our inputs by their corresponding weights and add a bias of 2 to get v. Then v is fed into the activation function to get the output of the neuron.

Given the inputs, weights, and bias shown in the image above (which are the same as in an earlier question), what is the approximate output (to the nearest thousandth) from this neuron after the integrated value of the inputs is evaluated by the activation function?
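Since the image isn't reproduced here, the sketch below uses invented inputs and weights (together with the bias of 2 mentioned above) just to show the two-step computation of integration followed by activation:

```python
import math

# Sigmoid activation: squashes any real v into the interval (0, 1).
def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

x = [0.5, -1.0, 2.0]   # invented inputs (not the ones in the image)
w = [1.5, 2.0, 0.25]   # invented weights
b = 2.0                # bias of 2, as stated above

v = sum(wi * xi for wi, xi in zip(w, x)) + b  # integration: v = 1.25
print(round(sigmoid(v), 3))                   # activation: 0.777
```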


Computational Models of The Neuron

We’ve now built up a basic computational model of neurons. While one neuron might not seem powerful, connecting many together in a clever manner can yield a highly effective learning model. This turns out to be true for ANNs, as evidenced by the universal approximation theorem.

The remainder of this course focuses on the methods used to construct and train ANNs, highlighting the intuition behind the models and their applications. Let’s dive in!
