What is the original Hopfield Neural Network (HNN)?

Introduction

The Hopfield network can be used for associative, or content-addressable, memories and for a whole set of optimization problems, such as finding the combinatorially best route for a traveling salesman. Figure 5.3.1 outlines a basic Hopfield network. In the original network, each processing element operated in a binary format.


Each element computes the weighted sum of its inputs and quantizes the output to a zero or one. These restrictions were later relaxed: the paradigm can use a sigmoid-based transfer function for finer class distinction. Hopfield himself showed that the resulting network is equivalent to the original binary network he designed in 1982.
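As a minimal sketch of that binary rule (in NumPy; the function name, weights, and threshold here are illustrative assumptions, not taken from Hopfield's paper):

```python
import numpy as np

def binary_unit_update(weights, inputs, threshold=0.0):
    """One processing element: weighted sum of inputs, quantized to 0 or 1."""
    activation = np.dot(weights, inputs)
    return 1 if activation > threshold else 0

# A unit with weights [0.5, -0.3, 0.8] receiving inputs [1, 1, 0]:
print(binary_unit_update(np.array([0.5, -0.3, 0.8]), np.array([1, 1, 0])))  # 1
```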


1- A Hopfield Network Example

The Hopfield network uses three layers: an input buffer, a Hopfield layer, and an output layer. Each layer has the same number of processing elements. The inputs of the Hopfield layer are connected to the outputs of the corresponding processing elements in the input buffer layer through variable connection weights.


The outputs of the Hopfield layer are connected back to the inputs of every other processing element in that layer, but not to the element itself. They are also connected to the corresponding elements in the output layer. In normal recall operation, the network applies the data from the input layer through the learned connection weights to the Hopfield layer.


2- Recall in the Hopfield Layer

The Hopfield layer oscillates until some fixed number of cycles has been completed, and the current state of that layer is passed on to the output layer. This state matches a pattern already programmed into the network. Learning in a Hopfield network requires that a training pattern be impressed on both the input and output layers simultaneously. The recursive nature of the Hopfield layer provides a means of adjusting all of the connection weights.
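A hedged sketch of this recall loop, assuming the common bipolar (+1/-1) state convention and asynchronous updates (all names are illustrative):

```python
import numpy as np

def hopfield_recall(W, x, cycles=10):
    """Cycle the Hopfield layer a fixed number of times and return its state.

    W : symmetric weight matrix with a zero diagonal (no self-connections)
    x : bipolar (+1/-1) state vector seeded from the input buffer layer
    """
    s = x.copy()
    for _ in range(cycles):
        for i in np.random.permutation(len(s)):      # asynchronous updates
            s[i] = 1 if np.dot(W[i], s) >= 0 else -1
    return s                                          # passed on to the output layer
```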


3- The Hopfield Law Learning Rule

The learning rule is the Hopfield Law, where connection weights are increased when both the input and output of a Hopfield element are the same, and decreased when the output does not match the input. Obviously, any non-binary implementation of the network must have a threshold mechanism in the transfer function, or matching input-output pairs could be too rare to train the network properly.
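A small sketch of the Hopfield Law as stated above, assuming bipolar patterns and an illustrative learning rate (`hopfield_law_step` and `lr` are names chosen here, not from the source):

```python
import numpy as np

def hopfield_law_step(W, x_in, x_out, lr=0.1):
    """Raise w_ij when input j and output i agree; lower it when they differ.

    x_in, x_out : bipolar (+1/-1) patterns impressed on input and output layers
    """
    agreement = np.outer(x_out, x_in)   # +1 where the pair matches, -1 where it differs
    W = W + lr * agreement
    np.fill_diagonal(W, 0.0)            # keep the no-self-connection constraint
    return W
```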


The Hopfield network has two major limitations when used as a content-addressable memory. First, the number of patterns that can be stored and accurately recalled is severely limited. If too many patterns are stored, the network may converge to a spurious pattern different from all programmed patterns, or it may not converge at all.


The storage capacity limit for the network is approximately fifteen percent of the number of processing elements in the Hopfield layer; a layer of 100 elements, for example, can reliably store only about 15 patterns. The second limitation of the paradigm is that the Hopfield layer may become unstable if the stored patterns are too similar to one another.


Here an example pattern is considered unstable if, when it is applied at time zero, the network converges to some other pattern from the training set. This problem can be minimized by modifying the patterns to be more nearly orthogonal to one another.
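One way to check how "orthogonal" two stored patterns are is their normalized overlap; the helper below is an illustrative sketch, not part of the original paradigm:

```python
import numpy as np

def overlap(p, q):
    """Normalized overlap of two bipolar patterns: 0 means fully orthogonal,
    +/-1 means identical or inverted. Pairs near 0 store most reliably."""
    return np.dot(p, q) / len(p)

a = np.array([1, 1, -1, -1])
b = np.array([1, -1, 1, -1])
print(overlap(a, b))   # 0.0 -- orthogonal, a safe pair to store together
```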


4- Boltzmann Machine

The Boltzmann machine is similar in function and operation to the Hopfield network, with the addition of a simulated annealing technique for determining the original pattern. The Boltzmann machine incorporates the concept of simulated annealing to search the pattern layer's state space for a global minimum.


Ackley, Hinton, and Sejnowski developed the Boltzmann learning rule in 1985. Like the Hopfield network, the Boltzmann machine has an associated state space energy based upon the connection weights in the pattern layer.


5- Learning as Energy Minimization

The process of learning a training set full of patterns involves the minimization of this state space energy. Because of this, the machine will gravitate to an improved set of values for the connection weights as data iterates through the system. The Boltzmann machine requires a simulated annealing schedule, which is added to the learning process of the network.


Just as in physical annealing, the temperature starts at a higher value and decreases over time. The higher temperature adds a larger noise factor into each processing element in the pattern layer. Typically, the final temperature is zero.
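The text does not prescribe a particular schedule; a geometric decay ending at zero is one common choice, sketched here:

```python
def geometric_schedule(t_start=10.0, t_final=0.01, alpha=0.9):
    """Yield a decreasing sequence of temperatures, ending at zero."""
    t = t_start
    while t > t_final:
        yield t
        t *= alpha
    yield 0.0   # typical final temperature, as noted above
```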


6- Settling and the Annealing Schedule

If the network fails to settle properly, adding more iterations at lower temperatures may help it reach an optimal solution. A Boltzmann machine learning at a high temperature behaves much like a random model, while at a low temperature it behaves like a deterministic model. Because of the random component in annealed learning, a processing element can sometimes assume a new state value that increases rather than decreases the overall energy of the system. This mimics physical annealing and is helpful in escaping local minima and moving toward a global minimum.
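A minimal sketch of the stochastic update that produces this behavior, assuming binary (0/1) units and the standard Boltzmann acceptance probability (names are illustrative):

```python
import numpy as np

def boltzmann_unit_update(W, s, i, T, rng=np.random.default_rng()):
    """Stochastic update of binary unit i at temperature T.

    The unit turns on with probability 1 / (1 + exp(-dE/T)), so at high T it
    can adopt a state that *raises* the system energy, helping it escape local
    minima; as T approaches zero the rule reduces to the deterministic update.
    """
    dE = np.dot(W[i], s)                    # energy gap between unit i on and off
    if T <= 0:
        s[i] = 1 if dE >= 0 else 0          # deterministic limit
    else:
        p_on = 1.0 / (1.0 + np.exp(-dE / T))
        s[i] = 1 if rng.random() < p_on else 0
    return s
```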


As with the Hopfield network, once a set of patterns is learned, a partial pattern can be presented to the network and it will complete the missing information. The limitation on the number of classes, at less than fifteen percent of the total processing elements in the pattern layer, still applies.
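A self-contained sketch of this pattern-completion behavior, using a single Hebbian-stored pattern (the Hebbian outer-product storage rule is a standard stand-in here; the pattern values are illustrative):

```python
import numpy as np

# Store one bipolar pattern with a Hebbian outer product, then recover it
# from a partially corrupted copy -- sizes and values are illustrative only.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)                 # no self-connections

partial = pattern.copy()
partial[4:] = 1                          # "forget" the second half of the pattern

state = partial.copy()
for _ in range(5):                       # a few recall cycles
    for i in range(len(state)):
        state[i] = 1 if np.dot(W[i], state) >= 0 else -1

print(np.array_equal(state, pattern))    # True: the missing half is completed
```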


7- Hamming Network

The Hamming network is an extension of the Hopfield network in that it adds a maximum-likelihood classifier to the front end. This network was developed by Richard Lippmann in the mid-1980s. The Hamming network implements a classifier based upon least error for binary input vectors, where the error is defined by the Hamming distance.


The Hamming distance is defined as the number of bits which differ between two corresponding, fixed-length input vectors. One input vector is the noiseless example pattern; the other is a pattern corrupted by real-world events.
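A direct translation of this definition (the function name and example vectors are illustrative):

```python
import numpy as np

def hamming_distance(a, b):
    """Count the bit positions at which two fixed-length binary vectors differ."""
    return int(np.sum(a != b))

clean = np.array([1, 0, 1, 1, 0])   # noiseless example pattern
noisy = np.array([1, 1, 1, 0, 0])   # the same pattern corrupted in two places
print(hamming_distance(clean, noisy))   # 2
```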


In this network architecture, the output categories are defined by a noiseless, pattern-filled training set. In recall mode, any incoming input vector is assigned to the category whose example vector has the minimum distance to the current input.
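A minimal sketch of this minimum-distance assignment (the exemplars and function name are illustrative assumptions):

```python
import numpy as np

def classify(exemplars, x):
    """Return the index of the category whose exemplar is nearest to x."""
    distances = [int(np.sum(e != x)) for e in exemplars]
    return int(np.argmin(distances))

exemplars = [np.array([0, 0, 0, 0]), np.array([1, 1, 1, 1])]
print(classify(exemplars, np.array([1, 0, 1, 1])))   # 1 -- closest to all-ones
```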


8- The Three Layers of the Hamming Network

The Hamming network has three layers; an example network is shown in Figure 5.3.2. The network uses an input layer with as many nodes as there are separate binary features. It has a category layer, which is the Hopfield layer, with as many nodes as there are categories, or classes.



This differs significantly from the formal Hopfield architecture, which has as many nodes in the middle layer as there are input nodes. Finally, there is an output layer whose number of nodes matches that of the category layer.


The network is a simple feedforward architecture, with the input layer fully connected to the category layer. Each processing element in the category layer is connected back to every other element in that same layer, as well as directly to its output processing element. The passing of the category layer's result to the output layer is done through competition.
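The source does not detail the competition mechanism; a common realization is a MAXNET-style winner-take-all, sketched here under that assumption (the function name and parameters are illustrative):

```python
import numpy as np

def maxnet(scores, epsilon=None, max_iters=100):
    """Iterative winner-take-all over the category layer's match scores:
    each node inhibits every other node until a single winner remains."""
    a = np.asarray(scores, dtype=float).copy()
    if epsilon is None:
        epsilon = 1.0 / (len(a) + 1)            # inhibition below 1/(node count)
    for _ in range(max_iters):
        inhibition = epsilon * (a.sum() - a)    # input from every *other* node
        a = np.maximum(a - inhibition, 0.0)
        if np.count_nonzero(a) <= 1:
            break
    return a                                    # only the winner stays positive

print(maxnet([3.0, 5.0, 4.0]))   # only the middle (best-matching) node survives
```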
