What is an adaptive resonance network and what is its role?

Introduction
The adaptive resonance network, developed by Stephen Grossberg in the mid-1970s, creates categories of input data based on adaptive resonance. The topology is biologically plausible and uses an unsupervised learning function. It analyses behaviorally significant input data and detects possible features or classifies patterns in the input vector.


This network was the basis for many other network paradigms, such as counter-propagation and bi-directional associative memory networks. The heart of the adaptive resonance network consists of two highly interconnected layers of processing elements located between an input and output layer. 


Each input pattern presented to the lower resonance layer induces an expected pattern to be sent from the upper layer back to the lower layer, influencing the next input. This creates a "resonance" between the lower and upper layers that facilitates network adaptation to patterns.
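
To make the matching cycle concrete, here is a minimal Python sketch of an ART1-style presentation step for binary inputs. The vigilance parameter rho and the fast-learning update follow standard ART1 conventions; the function and variable names are illustrative, not taken from any particular implementation.

```python
import numpy as np

def art1_present(I, prototypes, rho=0.7, beta=1.0):
    """Present one binary pattern to an ART1-style network.

    I          -- binary input vector (bottom-up signal), at least one active bit
    prototypes -- list of binary prototype vectors (top-down expectations)
    rho        -- vigilance: minimum match ratio required for resonance
    Returns the index of the category that resonated (existing or new).
    """
    # Choice function: how strongly each stored expectation matches the input.
    scores = [np.minimum(I, w).sum() / (beta + w.sum()) for w in prototypes]
    # Search categories from best to worst match.
    for j in np.argsort(scores)[::-1]:
        match = np.minimum(I, prototypes[j]).sum() / I.sum()
        if match >= rho:
            # Resonance: the top-down expectation fits the input well enough.
            prototypes[j] = np.minimum(I, prototypes[j])  # fast learning
            return j
        # Otherwise "reset": suppress this category and continue the search.
    prototypes.append(I.copy())  # no category resonated: create a new one
    return len(prototypes) - 1
```

Raising rho toward one forces finer categories, while lowering it makes the network lump more inputs together. The noise sensitivity discussed below also shows up here directly: a few flipped bits can change which category resonates.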


The network is normally used in biological modelling; however, some engineering applications do exist. The major limitation of the architecture is its susceptibility to noise: even a small amount of noise on the input vector confuses the pattern-matching capabilities of a trained network. The adaptive resonance theory network topology is protected by a patent held by Boston University.


Self-Organizing Map

Developed by Teuvo Kohonen in the early 1980s, the self-organizing map projects input data onto a two-dimensional layer in a way that preserves order, compacts sparse data, and spreads out dense data. In other words, if two input vectors are close, they will be mapped to processing elements that are close together in the two-dimensional Kohonen layer that represents the features or clusters of the input data.


Here, the processing elements represent a two-dimensional map of the input data. The primary use of the self-organizing map is to visualize topologies and hierarchical structures of higher-dimensional input spaces. The self-organizing network has been used to create area-filling curves in the two-dimensional space created by the Kohonen layer.


The Kohonen layer can also be used for optimization problems by allowing the connection weights to settle out into a minimum energy pattern. A key difference between this network and many other networks is that the self-organizing map learns without supervision. 


However, when the topology is combined with other neural layers for prediction or categorization, the network first learns in an unsupervised manner and then switches to a supervised mode for the trained network to which it is attached.


An example self-organizing map network is shown in the figure below.

The self-organizing map typically has two layers. The input layer is fully connected to a two-dimensional Kohonen layer. The output layer shown here is used in a categorization problem and represents three classes to which the input vector can belong. This output layer typically learns using the delta rule and is similar in operation to the counter-propagation paradigm.
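
As a rough illustration of how such an output layer could learn, the following sketch applies the delta rule to the one-hot vector produced by the Kohonen layer (flattened to one dimension). The names, shapes, and learning rate are assumptions for the example, not part of the original paradigm's specification.

```python
import numpy as np

def delta_rule_step(kohonen_out, target, W_out, lr=0.1):
    """One delta-rule update for the categorization output layer.

    kohonen_out -- one-hot activation from the Kohonen layer, shape (n_kohonen,)
    target      -- desired class vector, e.g. one-of-three encoding, shape (n_classes,)
    W_out       -- output weights, shape (n_classes, n_kohonen), updated in place
    """
    y = W_out @ kohonen_out                           # current output activation
    W_out += lr * np.outer(target - y, kohonen_out)   # delta rule: reduce the error
    return y
```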


Figure: An Example Self-Organizing Map Network.
The Kohonen layer processing elements each measure the Euclidean distance of their weights from the incoming input values. During recall, the Kohonen element with the minimum distance is the winner and outputs a one to the output layer (if one is present).

This is a competitive win, so all other processing elements are forced to zero for that input vector. The winning processing element is thus, in a measurable way, the closest to the input value and so represents the input value in the Kohonen two-dimensional map.
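
A minimal sketch of this recall step, assuming the Kohonen weights are stored as a rows-by-columns grid of weight vectors (the layout and names are illustrative):

```python
import numpy as np

def som_recall(x, W):
    """Competitive recall: the closest Kohonen element wins.

    x -- input vector, shape (d,)
    W -- Kohonen weight grid, shape (rows, cols, d)
    Returns the winner's grid coordinate and the one-hot output map.
    """
    dist = np.linalg.norm(W - x, axis=2)    # Euclidean distance per element
    winner = np.unravel_index(np.argmin(dist), dist.shape)
    out = np.zeros(dist.shape)
    out[winner] = 1.0                       # winner outputs one, all others zero
    return winner, out
```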


So the input data, which may have many dimensions, comes to be represented by a two-dimensional vector which preserves the order of the higher-dimensional input data. This can be thought of as an order-preserving projection of the input space onto the two-dimensional Kohonen layer.
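
Using the som_recall sketch above, the projection is simply the grid coordinate of each input's winning element:

```python
# With a trained weight grid W, each high-dimensional input reduces to the
# (row, col) coordinate of its winning element on the two-dimensional map.
coords = [som_recall(x, W)[0] for x in inputs]
# If two inputs are close in the original input space, their coordinates
# should also be close on the grid -- the order-preserving property.
```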


During training, the Kohonen processing element with the smallest distance adjusts its weights to be closer to the values of the input data. The neighbors of the winning element also adjust their weights to be closer to the same input data vector. The adjustment of neighboring processing elements is instrumental in preserving the order of the input space.


Training is done with the competitive Kohonen learning law described under counter-propagation. As in that paradigm, one processing element can take over a region and come to represent too much of the input data; this problem is solved by a conscience mechanism built into the learning function.


The conscience rule depends on keeping a record of how often each Kohonen processing element wins and this information is then used during training to bias the distance measurement. This conscience mechanism helps the Kohonen layer achieve its strongest benefit. 
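
One common formulation of this idea is DeSieno's conscience mechanism; the sketch below inflates the effective distance of elements that win more often than the fair share 1/n. The bias constant and averaging rate are illustrative.

```python
import numpy as np

def conscience_winner(x, W, wins, bias=0.3, rate=0.001):
    """Pick a winner with a conscience bias against frequent winners.

    wins -- running win frequency per element, same grid shape as the
            map, updated in place.
    """
    n = W.shape[0] * W.shape[1]
    dist = np.linalg.norm(W - x, axis=2)
    # Elements that win more often than 1/n have their distance inflated,
    # so under-used elements get a chance to represent some of the data.
    biased = dist + bias * (wins - 1.0 / n)
    winner = np.unravel_index(np.argmin(biased), biased.shape)
    wins *= 1.0 - rate                      # decay all win frequencies
    wins[winner] += rate                    # credit the current winner
    return winner
```

In effect, the biased distance spreads wins evenly across the layer, which is what allows the map to allocate elements in proportion to data density, as described next.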


The processing elements naturally represent approximately equal information about the input data set. Where the input space has sparse data, the representation is compacted in the Kohonen space, or map.

Where the input space has high density, the representative Kohonen elements spread out to allow finer discrimination. In this way the Kohonen layer is thought to mimic the knowledge representation of biological systems.


Conclusion
The last major type of network application is data filtering. An early network, the MADALINE, belongs in this category: it removed echoes from phone lines through a dynamic echo-cancellation circuit.

More recent work has enabled modems to work reliably at 4800 and 9600 baud through dynamic equalization techniques. Both of these applications use neural networks incorporated into special-purpose chips.
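
The learning rule behind the ADALINE/MADALINE family is least mean squares (LMS). As a rough sketch of the idea behind adaptive echo cancellation (the signal names, tap count, and step size are illustrative, not the actual modem circuitry):

```python
import numpy as np

def lms_echo_cancel(reference, observed, n_taps=8, mu=0.01):
    """Adaptive LMS filter: estimate and subtract the echo of `reference`.

    reference -- signal that produces the echo (e.g. the far-end speech)
    observed  -- received signal containing that echo
    Returns the residual signal with the estimated echo removed.
    """
    w = np.zeros(n_taps)                      # adaptive filter taps
    residual = np.zeros(len(observed))
    for t in range(n_taps, len(observed)):
        x = reference[t - n_taps:t]           # recent reference samples
        echo_est = w @ x                      # filter's current echo estimate
        residual[t] = observed[t] - echo_est  # what remains after cancellation
        w += mu * residual[t] * x             # LMS update: shrink the error
    return residual
```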
