The perceptron
As we anticipated earlier, the concept of the perceptron is inspired by the biological neuron, and its main function is to decide whether to block a signal or let it pass. Neurons receive a set of binary inputs in the form of electrical signals. If the total signal surpasses a certain threshold, the neuron fires an output.
A perceptron does the same, as we can see in the following diagram:

The perceptron can receive multiple inputs, and each input is multiplied by a weight. The sum of the weighted signals then passes through an activation function, in this case a step function: if the total signal is greater than a certain threshold, the perceptron lets the signal pass; otherwise, it does not. We can represent this mathematically with the following formula:
$$z = \sum_{i=1}^{n} w_i x_i = W^T x$$
This is the mathematical model for a neuron, represented as an explicit sum and as a matrix operation. The term $W^T x$ is the vectorized representation of the formula, where W is the weight matrix that is first transposed and then multiplied by the vector of inputs, x.
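As a quick illustration, here is a minimal NumPy sketch (the values and variable names are our own, not from the book) showing that the explicit sum and the vectorized form produce the same result:

```python
import numpy as np

# Illustrative inputs and weights (arbitrary example values)
x = np.array([1.0, 0.0, 1.0])   # input vector
W = np.array([0.5, -0.2, 0.8])  # weight vector

# Explicit sum: z = sum_i(w_i * x_i)
z_sum = sum(w_i * x_i for w_i, x_i in zip(W, x))

# Vectorized form: z = W^T x (for 1-D arrays, a simple dot product)
z_dot = np.dot(W, x)

print(z_sum, z_dot)  # both are 1.3
```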
To get a complete mathematical description, we should add a constant term, b, called the bias:
$$z = W^T x + b = \sum_{i=1}^{n} w_i x_i + b$$
Now we have the generic expression of a linear equation, which describes the whole computation that happens before the step function.
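In code, the full linear combination can be sketched as follows (again with illustrative values of our own choosing):

```python
import numpy as np

x = np.array([1.0, 0.0, 1.0])   # inputs (illustrative values)
W = np.array([0.5, -0.2, 0.8])  # weights
b = -1.0                        # bias

# Complete linear combination computed before the activation function
z = np.dot(W, x) + b            # z = W^T x + b
print(z)                        # roughly 0.3
```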
Next, the linear combination of the inputs and the weights, z, goes through the activation function, which determines whether the perceptron lets the signal pass.
The simplest activation function is the step function. With it, the output of the neuron can be represented by the following equation:
$$\text{output} = \begin{cases} 0 & \text{if } z \leq 0 \\ 1 & \text{if } z > 0 \end{cases}$$
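A step activation of this kind might look like the following minimal sketch; here we threshold at zero because the bias b is already included in z:

```python
def step(z):
    # Fire (output 1) only if the combined signal exceeds the threshold
    return 1 if z > 0 else 0

print(step(-0.5), step(0.3))  # 0 1
```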
This can be visualized by the following plot:

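A short matplotlib sketch (our own addition, assuming NumPy and matplotlib are available) can generate a similar plot of the step function:

```python
import numpy as np
import matplotlib.pyplot as plt

z = np.linspace(-2, 2, 400)
output = np.where(z > 0, 1, 0)  # step function evaluated over a range of z values

plt.plot(z, output)
plt.xlabel('z')
plt.ylabel('output')
plt.title('Step activation function')
plt.show()
```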
There are many types of activation functions; we will describe them later on.
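Putting the pieces together, a complete forward pass of a single perceptron might look like the following sketch; the weights and bias are hand-picked for illustration so that the perceptron behaves like a logical AND, and are not taken from the book:

```python
import numpy as np

def step(z):
    # Step activation: fire (1) if the signal exceeds the threshold, else 0
    return 1 if z > 0 else 0

def perceptron(x, W, b):
    # Weighted sum plus bias, followed by the step activation
    return step(np.dot(W, x) + b)

# Hand-picked weights and bias so the perceptron computes a logical AND
W = np.array([1.0, 1.0])
b = -1.5

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron(np.array(x, dtype=float), W, b))
# (0, 0) -> 0, (0, 1) -> 0, (1, 0) -> 0, (1, 1) -> 1
```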