Prototyping and analyzing vector quantization network learning agglomeration algorithm


1- Networks for Classification 

The previous section describes networks that attempt to make projections of the future. But understanding trends and what impacts those trends might have is only one of several types of applications. The second class of applications is classification. 


A network that can classify could be used in the medical industry to process both lab results and doctor-recorded patient symptoms to determine the most likely disease. Other applications can separate the "tire kicker" inquiries from the requests for information from real buyers.


2- Learning Vector Quantization

This network topology was originally suggested by Teuvo Kohonen in the mid-1980s, well after his original work in self-organizing maps. Both this network and self-organizing maps are based on the Kohonen layer, which is capable of sorting items into appropriate categories of similar objects. 


Specifically, Learning Vector Quantization is an artificial neural network model used both for classification and for image segmentation problems. Topologically, the network contains an input layer, a single Kohonen layer and an output layer. 


3- An example network is shown in the figure. 

The output layer has as many processing elements as there are distinct categories, or classes. The Kohonen layer has a number of processing elements grouped for each of these classes. The number of processing elements per class depends upon the complexity of the input-output relationship. Usually, each class will have the same number of elements throughout the layer. 
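As a rough sketch of this layout, the snippet below (Python with NumPy) builds a Kohonen layer with the same number of processing elements for every class. The helper name, the value of `prototypes_per_class`, and seeding each element from a randomly chosen training vector of its class are illustrative assumptions, not something prescribed by the text.

```python
import numpy as np

def init_kohonen_layer(train_x, train_y, prototypes_per_class=3, seed=0):
    """Create one weight vector ("processing element") per prototype slot,
    grouped so that every class gets the same number of elements."""
    rng = np.random.default_rng(seed)
    prototypes, proto_labels = [], []
    for c in np.unique(train_y):
        class_vectors = train_x[train_y == c]
        # Seed each element with a randomly chosen training vector of its class.
        idx = rng.choice(len(class_vectors), size=prototypes_per_class, replace=True)
        prototypes.append(class_vectors[idx])
        proto_labels.append(np.full(prototypes_per_class, c))
    return np.vstack(prototypes), np.concatenate(proto_labels)
```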


It is the Kohonen layer that learns and performs relational classifications with the aid of a training set. This network uses supervised learning rules. However, these rules vary significantly from the back-propagation rules. To optimize the learning and recall functions, the input layer should contain only one processing element for each separable input parameter. 


Higher-order input structures could also be used. Learning Vector Quantization classifies its input data into groupings that it determines. Essentially, it maps an n-dimensional space into an m-dimensional space. That is, it takes n inputs and produces m outputs. The network can be trained to classify inputs while preserving the inherent topology of the training set. 


Topology preserving maps preserve nearest neighbor relationships in the training set such that input patterns which have not been previously learned will be categorized by their nearest neighbors in the training data.


 4- An Example Learning Vector Quantization Network.

 In the training mode, this supervised network uses the Kohonen layer such that the distance of a training vector to each processing element is computed and the nearest processing element is declared the winner. There is only one winner for the whole layer. 


The winner will enable only one output processing element to fire, announcing the class or category the input vector belongs to. If the winning element is in the expected class of the training vector, it is reinforced toward the training vector. 


If the winning element is not in the class of the training vector, the connection weights entering the processing element are moved away from the training vector. This latter operation is referred to as repulsion. During this training process, individual processing elements assigned to a particular class migrate to the region associated with their specific class. 
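The attraction and repulsion updates described above might be sketched as follows. This is a simplified LVQ1-style rule: the learning rate, the epoch count, and the use of Euclidean distance are illustrative assumptions, and `prototypes`/`proto_labels` refer to the hypothetical layout sketch above.

```python
import numpy as np

def train_lvq1(prototypes, proto_labels, train_x, train_y,
               learning_rate=0.05, epochs=20):
    """Simplified LVQ1-style training: the nearest prototype is the single
    winner; it is attracted toward the training vector when the classes
    match and repelled from it otherwise."""
    protos = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, y in zip(train_x, train_y):
            distances = np.linalg.norm(protos - x, axis=1)
            winner = int(np.argmin(distances))            # one winner per layer
            if proto_labels[winner] == y:
                protos[winner] += learning_rate * (x - protos[winner])   # attraction
            else:
                protos[winner] -= learning_rate * (x - protos[winner])   # repulsion
    return protos
```

Running this on the prototypes produced by the earlier layout sketch would carry out the whole attraction/repulsion loop for a fixed number of passes over the training set.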


5- During the recall mode, the distance

 of an input vector to each processing element is computed and again the nearest element is declared the winner. That in turn generates one output, signifying the particular class found by the network. 
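A minimal recall sketch under the same assumptions (the function name is illustrative):

```python
import numpy as np

def classify(prototypes, proto_labels, x):
    """Recall mode: the nearest processing element wins and its class
    label is emitted as the single active output."""
    distances = np.linalg.norm(prototypes - x, axis=1)
    winner = int(np.argmin(distances))
    return proto_labels[winner]
```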


There are some shortcomings with the Learning Vector Quantization architecture. For complex classification problems with similar objects or input vectors, the network requires a large Kohonen layer with many processing elements per class. This can be overcome with selectively better choices for, or a higher-order representation of, the input parameters.


 6- The learning mechanism has some weaknesses 

which have been addressed by variants to the paradigm. Normally these variants are applied at different phases of the learning process. They introduce a conscience mechanism, a boundary adjustment algorithm, and an attraction function at different points while training the network. 


The simple form of the Learning Vector Quantization network suffers from the defect that some processing elements tend to win too often while others, in effect, do nothing. This particularly happens when the processing elements begin far from the training vectors. Here, some elements are drawn in close very quickly and the others remain permanently far away. 


7- To alleviate this problem, a conscience mechanism

 is added so that a processing element which wins too often develops a "guilty conscience" and is penalized. The actual conscience mechanism is a distance bias which is added to each processing element. This distance bias is proportional to the difference between the win frequency of an element and the average processing element win frequency. 
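One possible realization of that distance bias is sketched below. The constants `gamma` and `beta` and the running frequency update are assumptions; the text only states that the bias is proportional to the difference between an element's win frequency and the average win frequency. Here `win_freq` would be an array initialized to `1 / number_of_elements`.

```python
import numpy as np

def biased_winner(prototypes, win_freq, x, gamma=10.0, beta=0.01):
    """Winner selection with a conscience: elements that win more often than
    average receive a positive distance bias and are therefore penalized."""
    distances = np.linalg.norm(prototypes - x, axis=1)
    bias = gamma * (win_freq - win_freq.mean())   # frequent winners get a larger bias
    winner = int(np.argmin(distances + bias))
    # Maintain a running estimate of each element's win frequency.
    win_freq *= (1.0 - beta)
    win_freq[winner] += beta
    return winner
```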


As the network progresses along its learning curve, this bias proportionality factor needs to be decreased. The boundary adjustment algorithm is used to refine a solution once a relatively good solution has been found. This algorithm addresses the cases where the winning processing element is in the wrong class and the second-best processing element is in the right class. 


 8- A further limitation is that the training vector 

must lie near the midpoint of the line joining these two processing elements. The winning, wrong-class processing element is moved away from the training vector and the second-place element is moved toward the training vector. This procedure refines the boundary between regions where poor classifications commonly occur. 
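A sketch of that boundary-adjustment step, in the spirit of Kohonen's LVQ2 rule, is given below. The `window` test used to decide "near the midpoint" and the learning rate are illustrative assumptions, and `prototypes` is assumed to be a float array.

```python
import numpy as np

def boundary_adjust(prototypes, proto_labels, x, y,
                    learning_rate=0.01, window=0.3):
    """Boundary adjustment: applied only when the winner is in the wrong
    class, the runner-up is in the right class, and the training vector
    lies near the midpoint between the two elements."""
    distances = np.linalg.norm(prototypes - x, axis=1)
    order = np.argsort(distances)
    winner, runner_up = int(order[0]), int(order[1])
    d1, d2 = distances[winner] + 1e-12, distances[runner_up] + 1e-12
    # "Near the midpoint" test: the two distances must be comparable in size.
    near_boundary = min(d1 / d2, d2 / d1) > (1.0 - window)
    if proto_labels[winner] != y and proto_labels[runner_up] == y and near_boundary:
        prototypes[winner]    -= learning_rate * (x - prototypes[winner])     # repel wrong winner
        prototypes[runner_up] += learning_rate * (x - prototypes[runner_up])  # attract runner-up
    return prototypes
```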


In the early training of the Learning Vector Quantization network, it is sometimes desirable to turn off the repulsion. The winning processing element is then moved toward the training vector only if the training vector and the winning processing element are in the same class. This option is particularly helpful when a processing element must move across a region having a different class in order to reach the region where it is needed.


9- Counter-propagation Network


Robert Hecht-Nielsen developed the counter-propagation network as a means to combine an unsupervised Kohonen layer with a teachable output layer. This is yet another topology aimed at complex classification problems, while trying to minimize the number of processing elements and the training time. 


The operation for the counter-propagation network is similar to that of the Learning Vector Quantization network in that the middle Kohonen layer acts as an adaptive look-up table, finding the closest fit to an input stimulus and outputting its equivalent mapping.
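That look-up behavior might be sketched as follows (recall pass only). The names `kohonen_weights` and `output_weights` are illustrative: the former holds one weight vector per Kohonen element, the latter the output vector stored for each element; training of those weights is not shown.

```python
import numpy as np

def counterprop_recall(kohonen_weights, output_weights, x):
    """Forward (recall) pass of a counter-propagation network used as an
    adaptive look-up table: the closest Kohonen element wins, and the
    output layer emits the output vector stored for that winner."""
    distances = np.linalg.norm(kohonen_weights - x, axis=1)
    winner = int(np.argmin(distances))
    return output_weights[winner]
```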

