
What is an artificial neural network algorithm?

Many artificial neural network algorithms are widely used in intelligent information processing systems, especially the following four: the ART network, the LVQ network, the Kohonen network, and the Hopfield network. Details of these four algorithms follow:

1. Adaptive resonance theory (ART) network

Adaptive Resonance Theory (ART) networks come in several variants. The ART-1 network consists of two layers, an input layer and an output layer. The two layers are fully interconnected, with connections running in both the forward (bottom-up) and feedback (top-down) directions.

The ART-1 network trains continuously while it operates, following these algorithm steps (a minimal sketch is given after the list):

(1) Initialize all output neurons by setting all of their vigilance (top-down) weights to 1. A neuron in this state is called an uncommitted (independent) neuron, because it has not yet been assigned to represent any pattern class.

(2) Present a new input pattern X to the network.

(3) Let all output neurons take part in the activation competition.

(4) Find the winning output neuron among the competing neurons, that is, the neuron whose value of X·W (the dot product of the input with its bottom-up weight vector) is largest. The winner may be an uncommitted neuron at the start of training, or when no committed neuron matches better.

(5) Check whether the input pattern X is sufficiently similar to the vigilance (top-down) vector V of the winning neuron; the similarity ratio r is the fraction of the active components of X that are also present in V.

(6) If r ≥ ρ (where ρ is the vigilance parameter), resonance occurs; go to step (7). Otherwise the winning neuron is temporarily removed from the competition; go back to step (4) and repeat until no candidate neurons remain.

(7) Adjust the vigilance vector and the bottom-up weights of the resonating neuron to incorporate the input pattern, then return to step (2).
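Taken together, these steps form a short training loop. The following is only a minimal Python sketch of ART-1 fast learning, assuming binary (0/1) input patterns; the function name train_art1, the parameter L, and the weight-initialization formula are illustrative choices, not details taken from the text above.

```python
import numpy as np

def train_art1(patterns, n_categories, rho=0.7, L=2.0):
    """Minimal ART-1 sketch (assumed form): binary 0/1 patterns, fast learning."""
    patterns = np.asarray(patterns, dtype=float)
    n_inputs = patterns.shape[1]
    # Step 1: vigilance (top-down) vectors start at all 1s, so every neuron is uncommitted.
    V = np.ones((n_categories, n_inputs))
    # Bottom-up weights start small and uniform (one common initialization choice).
    W = np.full((n_categories, n_inputs), L / (L - 1.0 + n_inputs))

    assignments = []
    for X in patterns:                       # step 2: present a new pattern
        scores = W @ X                       # step 3: activation of every output neuron
        active = np.ones(n_categories, dtype=bool)
        winner = None
        while active.any():
            # Step 4: best remaining neuron, i.e. largest X·W among active neurons.
            j = int(np.argmax(np.where(active, scores, -np.inf)))
            match = np.minimum(V[j], X)      # components of X matched by the vigilance vector
            r = match.sum() / max(X.sum(), 1)
            if r >= rho:                     # step 6: resonance
                winner = j
                break
            active[j] = False                # failed the vigilance test: drop out for now
        if winner is None:                   # no neuron left that can code this pattern
            assignments.append(None)
            continue
        # Step 7: fast learning -- update the resonating neuron's weights.
        match = np.minimum(V[winner], X)
        V[winner] = match
        W[winner] = L * match / (L - 1.0 + match.sum())
        assignments.append(winner)
    return V, W, assignments
```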

2. Learning vector quantization (LVQ) network

A Learning Vector Quantization (LVQ) network consists of three layers of neurons: an input layer, a hidden layer, and an output layer. The network is fully connected between the input layer and the hidden layer, but only partially connected between the hidden layer and the output layer; each output neuron is connected to a different group of hidden neurons.

The simplest LVQ training procedure is as follows (a minimal sketch is given after the list):

(1) Preset the initial weights of the reference vectors.

(2) Present a training input pattern to the network.

(3) Calculate the Euclidean distance between the input pattern and each reference vector.

(4) Update the weights of the reference vector closest to the input pattern (that is, the reference vector of the winning hidden neuron). If the winning hidden neuron belongs to the group connected to the output neuron of the same class as the input pattern, the reference vector is moved closer to the input pattern; otherwise, it is moved away from the input pattern.

(5) Go to step (2) and repeat the process with new training input patterns until all training patterns are classified correctly or some other termination criterion is met.
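The procedure above corresponds to the basic LVQ1 update rule. The following minimal Python sketch assumes the reference vectors and their class labels are given directly, so the hidden-to-output grouping is reduced to one label per reference vector; the function name train_lvq1, the learning rate, and the epoch count are illustrative assumptions, not part of the text above.

```python
import numpy as np

def train_lvq1(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
    """Minimal LVQ1 sketch (assumed form): one class label per reference vector."""
    X = np.asarray(X, dtype=float)
    W = np.asarray(prototypes, dtype=float).copy()    # step 1: preset reference vectors
    for _ in range(epochs):
        for x, label in zip(X, y):                     # step 2: present a pattern
            d = np.linalg.norm(W - x, axis=1)          # step 3: Euclidean distances
            j = int(np.argmin(d))                      # winning reference vector
            if proto_labels[j] == label:               # step 4: same class -> move closer
                W[j] += lr * (x - W[j])
            else:                                      # different class -> move away
                W[j] -= lr * (x - W[j])
    return W                                           # step 5 is handled by the epoch loop
```

A fixed number of epochs is used here as the termination criterion; step (5) also allows stopping as soon as every training pattern is classified correctly.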

3. Kohonen Network

The Kohonen network, or self-organizing feature map, consists of two layers: an input buffer layer that receives input patterns, and an output layer. The neurons in the output layer are usually arranged in a regular two-dimensional array, and each output neuron is connected to all input neurons. The connection weights form the components of the reference vector associated with that output neuron.

Training a Kohonen network involves the following steps (a minimal sketch is given after the list):

(1) Preset small random initial values for the reference vectors of all output neurons.

(2) Present a training input pattern to the network.

(3) Determine the winning output neuron, that is, the neuron whose reference vector is closest to the input pattern. The Euclidean distance between the reference vector and the input vector is usually used as the distance measure.

(4) Update the reference vector of the winning neuron and the reference vectors of its neighbouring neurons. These reference vectors are moved closer to the input vector; the adjustment is largest for the winning reference vector and smaller for neurons farther away. The neighbourhood size decreases as training progresses, so that by the end of training only the winning neuron's reference vector is adjusted.
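As a rough illustration of steps (1) through (4), the following minimal Python sketch trains a small self-organizing map with a Gaussian neighbourhood whose radius and learning rate shrink over time; the function name train_som and all parameter values are illustrative assumptions, not details from the text above.

```python
import numpy as np

def train_som(X, grid_shape=(10, 10), epochs=50, lr0=0.5, sigma0=3.0):
    """Minimal Kohonen / self-organizing map sketch (assumed form)."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(0)
    rows, cols = grid_shape
    n_features = X.shape[1]
    # Step 1: small random reference vectors, one per output neuron in the 2-D array.
    W = rng.uniform(-0.1, 0.1, size=(rows, cols, n_features))
    # Grid coordinates of every output neuron, used by the neighbourhood function.
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)                  # learning rate decays with training
        sigma = max(sigma0 * (1.0 - t / epochs), 0.5)  # neighbourhood radius shrinks with training
        for x in X:                                    # step 2: present a pattern
            # Step 3: winner = neuron whose reference vector is closest (Euclidean distance).
            dists = np.linalg.norm(W - x, axis=-1)
            wi, wj = np.unravel_index(np.argmin(dists), dists.shape)
            # Step 4: move the winner and its grid neighbours toward the input,
            # with the largest adjustment at the winner itself.
            grid_dist2 = ((coords - np.array([wi, wj])) ** 2).sum(axis=-1)
            h = np.exp(-grid_dist2 / (2.0 * sigma ** 2))     # neighbourhood strength
            W += lr * h[..., None] * (x - W)
    return W
```

Shrinking sigma toward a small value reproduces the behaviour described in step (4): early in training whole neighbourhoods move together, while near the end effectively only the winning neuron is updated.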

4. Hopfield network

The Hopfield network is a typical recurrent network, which usually accepts only binary input (0 or 1) or bipolar input (+1 or -1). It consists of a single layer of neurons, and each neuron is connected to every other neuron, forming a recurrent structure.
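As a rough illustration, the following minimal Python sketch stores bipolar (+1/-1) patterns with a Hebbian rule and recalls them by repeated asynchronous updates of the fully connected recurrent layer; the function names hopfield_store and hopfield_recall and the specific storage and update rules are standard textbook choices assumed here, not details given in the text above.

```python
import numpy as np

def hopfield_store(patterns):
    """Store bipolar (+1/-1) patterns in a Hopfield weight matrix via the Hebbian rule."""
    patterns = np.asarray(patterns, dtype=float)
    n = patterns.shape[1]
    W = patterns.T @ patterns / n        # every neuron connected to every other neuron
    np.fill_diagonal(W, 0.0)             # no self-connections
    return W

def hopfield_recall(W, probe, steps=100, seed=0):
    """Recurrently update neurons one at a time until the state stops changing."""
    rng = np.random.default_rng(seed)
    s = np.asarray(probe, dtype=float).copy()
    for _ in range(steps):
        changed = False
        for i in rng.permutation(len(s)):            # asynchronous updates in random order
            new = 1.0 if W[i] @ s >= 0 else -1.0
            if new != s[i]:
                s[i], changed = new, True
        if not changed:                              # reached a stable state
            break
    return s

# Usage example: store one pattern and recover it from a corrupted probe.
stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
W = hopfield_store(stored)
noisy = np.array([1, -1, -1, -1, 1, -1, 1, 1])       # two components flipped
print(hopfield_recall(W, noisy))                     # settles back to the stored pattern
```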