The Hamming Network and Bi-directional Associative Memory
The Hamming network is an extension of the Hopfield network in that it adds a maximum-likelihood classifier to the front end. This network was developed by Richard Lippmann in the mid-1980s. The Hamming network implements a minimum-error classifier for binary input vectors, where the error is defined by the Hamming distance.
1- Hamming Network.
The Hamming distance is defined as the number of bits which differ between two corresponding fixed-length input vectors. One input vector is the noiseless example pattern; the other is a pattern corrupted by real-world events. In this network architecture, the output categories are defined by a noiseless training set of example patterns.
In recall mode, each incoming input vector is assigned to the category whose example input vector lies at the minimum Hamming distance from the current input vector.
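This minimum-distance rule can be sketched directly. The code below is an illustrative example, not the network itself: the exemplar patterns and category names are made up, and the classifier simply compares a (possibly corrupted) input against each noiseless exemplar.

```python
def hamming_distance(a, b):
    """Number of bit positions in which two equal-length binary vectors differ."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical noiseless exemplar patterns, one per output category.
exemplars = {
    "A": [1, 0, 1, 1, 0, 1],
    "B": [0, 1, 1, 0, 1, 0],
}

def classify(pattern):
    """Assign an input vector to the category with the minimum Hamming distance."""
    return min(exemplars, key=lambda c: hamming_distance(exemplars[c], pattern))

noisy = [1, 0, 1, 1, 1, 1]   # exemplar "A" with one flipped bit
print(classify(noisy))       # -> A
```

The Hamming network computes the same decision in parallel, with the distance comparison carried out by the weights and the competition in the category layer.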
2- The Hamming network has three layers.
There is an example network shown in Figure 5.3.2. The network uses an input layer with as many nodes as there are separate binary features. It has a category layer, which is the Hopfield layer, with as many nodes as there are categories, or classes.
This differs significantly from the formal Hopfield architecture, which has as many nodes in the middle layer as there are input nodes. And finally, there is an output layer which matches the number of nodes in the category layer.
The network is a simple feedforward architecture, with the input layer fully connected to the category layer. Each processing element in the category layer is connected back to every other element in that same layer, and also has a direct connection to the output processing element. The output from the category layer to the output layer is determined through competition.
3- A Hamming Network Example
The learning of a Hamming network is similar to the Hopfield methodology in that it requires a single-pass training set. However, in this supervised paradigm, the desired training pattern is impressed upon the input layer while the desired class to which the vector belongs is impressed upon the output layer.
Here the output contains only the category to which the input vector belongs. Again, the recursive nature of the Hopfield layer provides a means of adjusting all connection weights. The connection weights from the input layer to the category layer are first set such that the matching scores generated at the outputs of the category processing elements equal the number of input nodes minus the Hamming distances to the example input vectors. These matching scores range from zero to the total number of input elements and are highest for the input vectors that best match the learned patterns. The category layer's recursive connection weights are trained in the same manner as in the Hopfield network.
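One common way to realize this weight setup, assuming the usual bipolar (+1/-1) coding of the exemplars, is to set each category node's weights to half the exemplar values with a bias of n/2, so the node's net input works out to n minus the Hamming distance. The exemplar patterns below are illustrative.

```python
import numpy as np

# Illustrative exemplars in bipolar coding, one row per category.
exemplars = np.array([
    [ 1, -1,  1,  1],
    [-1,  1,  1, -1],
])
n = exemplars.shape[1]

W = exemplars / 2.0   # input-to-category weights: half the exemplar values
bias = n / 2.0        # constant offset of n/2 at each category node

def matching_scores(x):
    """Each score equals n - HammingDistance(exemplar, x): 0 (all bits differ) to n (exact match)."""
    return W @ np.asarray(x) + bias

print(matching_scores([1, -1, 1, 1]))   # exact match with exemplar 0 -> [4. 1.]
```

An input identical to an exemplar scores n for that category node, and each differing bit lowers the score by one, exactly as described above.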
4- In normal feedforward operation
An input vector is applied to the input layer and must be presented long enough to allow the matching-score outputs of the lower input-to-category subnet to settle. This initializes the input to the Hopfield function in the category layer and allows that portion of the subnet to find the closest class to which the input vector belongs.
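The competitive settling in the category layer is often sketched as a MAXNET-style iteration, in which each node slightly inhibits the others until only the node with the largest initial matching score remains active. The scores and inhibition constant below are made-up values for illustration.

```python
import numpy as np

def maxnet(scores, epsilon=0.2, max_iter=100):
    """Winner-take-all competition: repeatedly subtract a small fraction of the
    other nodes' activations from each node until at most one stays positive."""
    a = np.asarray(scores, dtype=float)
    for _ in range(max_iter):
        # each node keeps its own activation minus epsilon times the others' sum
        a = np.maximum(0.0, a - epsilon * (a.sum() - a))
        if np.count_nonzero(a) <= 1:
            break
    return int(np.argmax(a))   # index of the winning category node

print(maxnet([3.0, 1.0, 2.5]))  # -> 0
```

The inhibition constant must be small enough (here 0.2 for three nodes) that the node with the highest score is never driven to zero before its competitors.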
This layer is competitive, so only one output is enabled at a time.

The Hamming network has a number of advantages over the Hopfield network. It implements the optimum minimum-error classifier when input bit errors are random and independent, so the Hopfield network, with its random set-up nature, can at best match the Hamming solution, or it can be worse. Fewer processing elements are required for the Hamming solution, since the middle layer requires only one element per category instead of an element for each input node. And finally, the Hamming network does not suffer from the spurious classifications that may occur in the Hopfield network. All in all, the Hamming network is both faster and more accurate than the Hopfield network.

5- Bi-directional Associative Memory.
This network model was developed by Bart Kosko and again generalizes the Hopfield model. A set of paired patterns is learned, with the patterns represented as bipolar vectors. As in the Hopfield network, when a noisy version of one pattern is presented, the closest associated pattern is determined.
6- Bi-directional Associative Memory Example.
A diagram of an example bi-directional associative memory is shown in Figure 5.3.4. It has as many input as output processing nodes. The two middle layers are made up of two separate associative memories and are sized according to the two input vectors.
The two lengths need not be the same, although this example shows identical input vector lengths of four each. The middle layers are fully connected to each other. The input and output layers are, for implementation purposes, the means of entering and retrieving information from the network. Kosko's original work targeted the bi-directional associative memory layers for optical processing, which would not need formal input and output structures. The middle layers are designed to store associated pairs of vectors. When a noisy pattern vector is impressed upon the input, the middle layers oscillate back and forth until a stable equilibrium state is reached.
This state, provided the network is not overtrained, corresponds to the closest learned association and will generate the original training pattern on the output. Like the Hopfield network, the bi-directional associative memory network is susceptible to incorrectly finding a trained pattern when complements of the training set are used as the unknown input vector.
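The store-and-oscillate behavior described above can be sketched in a few lines. In this minimal example, pattern pairs are stored as a sum of outer products, and recall bounces a noisy pattern back and forth between the two middle layers until the state stops changing. The six-bit pattern pairs are illustrative, not taken from the text.

```python
import numpy as np

# Illustrative bipolar pattern pairs to associate: X[i] <-> Y[i].
X = np.array([[ 1,  1,  1,  1,  1,  1],
              [ 1,  1,  1, -1, -1, -1]])
Y = np.array([[ 1, -1,  1, -1,  1, -1],
              [ 1,  1, -1, -1,  1,  1]])

# Store the pairs as a sum of outer products (the BAM correlation matrix).
W = sum(np.outer(x, y) for x, y in zip(X, Y))

def sign(v):
    return np.where(v >= 0, 1, -1)

def recall(x, max_iter=20):
    """Bounce the state between the two layers until it reaches equilibrium."""
    x = np.asarray(x)
    y = sign(x @ W)              # forward pass: X layer -> Y layer
    for _ in range(max_iter):
        x_new = sign(W @ y)      # backward pass: Y layer -> X layer
        if np.array_equal(x_new, x):
            break                # stable equilibrium: the oscillation has settled
        x = x_new
        y = sign(x @ W)
    return x, y

noisy = np.array([-1, 1, 1, 1, 1, 1])   # X[0] with its first bit flipped
x_rec, y_rec = recall(noisy)
print(x_rec, y_rec)                     # settles on the stored pair (X[0], Y[0])
```

With only a handful of stored pairs relative to the vector length, the oscillation settles on the closest learned association; overloading the memory produces the spurious equilibria the text warns about.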