Multilayer Perceptron, Back Propagation
Multilayer Perceptron
Multilayer perceptrons (MLPs) are among the most widely used types of neural networks today. They belong to a general class of structures called feedforward neural networks, a basic type of neural network capable of approximating generic classes of functions, including continuous and integrable functions.
MLP Structure
The MLP structure groups neurons into layers. The first and last layers are called the input layer and the output layer respectively, since they represent the input and the output of the overall network; the remaining layers are called hidden layers. Typically, an MLP consists of an input layer, one or more hidden layers, and an output layer.
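To make the layered structure concrete, here is a minimal sketch of a forward pass through a one-hidden-layer MLP in Python with NumPy; the layer sizes, weight ranges, and sigmoid activation are illustrative assumptions, not values fixed by the text.

```python
import numpy as np

def sigmoid(x):
    # Logistic activation, applied element-wise
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Assumed layer sizes: 2 inputs -> 3 hidden units -> 1 output
W1 = rng.uniform(-0.5, 0.5, size=(2, 3))  # input -> hidden weights
W2 = rng.uniform(-0.5, 0.5, size=(3, 1))  # hidden -> output weights

def forward(x):
    # Each layer takes a weighted sum of the previous layer's
    # outputs and passes it through the activation function.
    hidden = sigmoid(x @ W1)
    output = sigmoid(hidden @ W2)
    return hidden, output

_, y = forward(np.array([0.0, 1.0]))
print(y)  # output of the untrained (randomly weighted) network
```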
Back Propagation
Back propagation is the training (learning) algorithm for the multilayer perceptron. The network is first initialized by setting all of its weights to small random values. Next, an input pattern is applied and the corresponding output is calculated. Because the weights were chosen at random, this output generally differs greatly from the desired one. We then calculate the error, which is the difference between the target output and the actual output. The error is used to adjust the weights in a direction that reduces it, and the process is repeated until the desired output is obtained. The full procedure is summarized in the algorithm below.
Algorithm:
1. Initialize the weights to small random values.
2. Feed a training sample through the network and determine the final output.
3. Compute the error term for each output unit: δ = out (1 - out) (t - out), where out is the actual output and t is the target output.
4. Calculate the updated weights feeding the output unit: w_new = w_old + η δ out, where η is the learning rate (here taken as 1) and out is the output of the unit on the input side of the weight.
5. Propagate the δ (error) terms back through the weights to the hidden layer units.
6. Calculate the new weights for the hidden layer units.
7. Repeat steps 2 to 6 for the remaining training samples.
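As an illustration of these steps, here is a minimal sketch of the training loop in Python with NumPy for a tiny one-hidden-layer network with sigmoid units; the XOR training set, layer sizes, and epoch count are assumptions chosen only to make the example self-contained and runnable.

```python
import numpy as np

def sigmoid(x):
    # Logistic activation, applied element-wise
    return 1.0 / (1.0 + np.exp(-x))

# Step 1: initialize the weights to small random values
rng = np.random.default_rng(0)
W1 = rng.uniform(-0.5, 0.5, size=(2, 3))  # input -> hidden weights
W2 = rng.uniform(-0.5, 0.5, size=(3, 1))  # hidden -> output weights

# Assumed toy training set: XOR inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

eta = 1.0  # learning rate, taken as 1 as in the text

for epoch in range(5000):
    for x, t in zip(X, T):
        # Step 2: feed the sample through the network
        hidden = sigmoid(x @ W1)
        out = sigmoid(hidden @ W2)

        # Step 3: error term of the output unit,
        # delta = out * (1 - out) * (t - out)
        delta_out = out * (1.0 - out) * (t - out)

        # Step 5: propagate the error back to the hidden units
        delta_hidden = hidden * (1.0 - hidden) * (W2 @ delta_out)

        # Steps 4 and 6: w_new = w_old + eta * delta * (unit input)
        W2 += eta * np.outer(hidden, delta_out)
        W1 += eta * np.outer(x, delta_hidden)

# After training, the outputs should be close to the XOR targets
for x in X:
    print(x, sigmoid(sigmoid(x @ W1) @ W2))
```

With these settings the outputs typically move close to the 0/1 targets; in practice a learning rate below 1 and more careful weight initialization are common choices.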