*Static adjustment of a nuclear neural network prior to its training is considered. The purposes of static adjustment are:*

- *reduction of the real levels of the input signals to the operating range of the neural network by an expedient choice of the synaptic weights and biases in the first layer;*
- *reduction of the network output signals to the range of true values by an expedient choice of the synaptic weights and biases in the last layer;*
- *setting the initial state of the network into the area of general position, that is, of maximum sensitivity to parameter variation, by an optimal choice of the initial values of the biases and synaptic weights of the hidden-layer neurons.*

*During static adjustment, the synaptic weights and biases for the first and the hidden layers of the neural network are chosen.*

*Below, the basic mathematical relations used in the training algorithm of a nuclear neural network are given. In the training algorithm, along with the adjustment of the synaptic maps, the nonlinear functions are selected by changing the biases of their arguments.*

Training of nuclear neural networks plays an important role in the design of information systems based on neural networks.

Before training, the initial values of the weight factors of the neural network must be set in one way or another. The weight factors are usually initialized randomly. Static adjustment is intended to improve this initialization on the basis of additional information about the data [1, 2]. The purposes of static adjustment are:

- reduction of the real levels of the input signals to the operating range of the neural network by an expedient choice of the synaptic weights and biases in the first layer;
- reduction of the network output signals to the range of true values by an expedient choice of the synaptic weights and biases in the last layer;
- setting the initial state of the network into the area of general position, that is, of maximum sensitivity to parameter variation, by an optimal choice of the initial values of the biases and synaptic weights of the hidden-layer neurons.
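The third item above, putting the hidden neurons into the area of general position, can be sketched as follows. This is a minimal illustration rather than the paper's own algorithm: it assumes tanh neurons, whose sensitivity (derivative) is maximal near zero pre-activation, and draws small random weights so that the initial pre-activations fall into the steep part of the activation.

```python
import numpy as np

def init_hidden_layer(n_in, n_out, rng=None):
    """Initialize weights and biases so that the pre-activations stay in
    the steep region of tanh (the 'area of general position')."""
    rng = np.random.default_rng(rng)
    # Scale ~ 1/sqrt(n_in): for inputs of roughly unit scale, the sum of
    # n_in terms then also has roughly unit scale, not larger.
    w = rng.uniform(-1.0, 1.0, size=(n_out, n_in)) / np.sqrt(n_in)
    b = np.zeros(n_out)   # zero bias keeps the operating point centred
    return w, b

w, b = init_hidden_layer(n_in=16, n_out=8, rng=0)
x = np.random.default_rng(1).uniform(-1, 1, size=16)
z = w @ x + b             # pre-activations stay small in magnitude,
                          # where tanh'(z) = 1 - tanh(z)**2 is near its maximum
```

With large random weights the neurons would start saturated and nearly insensitive to parameter variation, which is exactly what this adjustment avoids.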

Let us denote the weight factors of the synaptic maps of layer *m* as

$$w_i^m(a, b), \qquad (1)$$

where *i* is the number of the nucleus and *m* is the number of the layer.

Data processing in the nucleus $A_i$ is defined by the expression

$$x^{m+1}(b) = f\Bigl(\sum_{a} w_i^m(a, b)\, x^m(a)\Bigr), \qquad (2)$$

where $x^m(a)$ are the signals of layer $m$ and $f$ is the nonlinear activation function of the neuron.

In the training algorithm, along with the adjustment of the synaptic maps, the nonlinear functions are selected by changing the biases of their arguments. Formally, the bias is realized by adding a fictitious (dummy) coordinate $x^m(*)$, which usually has a constant value of +1; in addition, an adjustable synaptic weight $w_i^m(*, b)$ is added for each neuron of the nucleus. This technique allows the form of expression (2) to be kept unchanged, assuming that the number of summands in the sum is increased by one.
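The dummy-coordinate trick can be sketched as follows (a minimal illustration with a plain weight matrix; the helper names are ours, not the paper's): appending a constant +1 to the input and the bias column to the weight matrix turns the biased sum into an ordinary weighted sum with one extra summand.

```python
import numpy as np

def augment(x):
    """Append the fictitious coordinate x(*) = +1 to the input vector."""
    return np.concatenate([x, [1.0]])

def fold_bias(w, b):
    """Append the bias vector as one extra weight column w(*, b)."""
    return np.hstack([w, b[:, None]])

rng = np.random.default_rng(0)
w = rng.normal(size=(3, 5))   # synaptic weights: 5 inputs -> 3 neurons
b = rng.normal(size=3)        # biases of the 3 neurons
x = rng.normal(size=5)

# The biased sum equals the plain sum over the augmented input:
z_biased = w @ x + b
z_folded = fold_bias(w, b) @ augment(x)
```

The two pre-activations coincide exactly, so the nonlinearity and all downstream formulas stay unchanged.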

**Static adjustment of the first layer.** Source information: for each input variable $x^{(u)}$ the following are considered known:

- the average value of the variable $\bar{x}^{(u)}$;
- its change range $\Delta x^{(u)}$.
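Given the average value and the change range of each input variable, the reduction of the real input levels to the operating range can be sketched as below. This is our illustration rather than the paper's formulas: it assumes the operating range of the neurons is $[-1, 1]$, so each input is centred by its mean and scaled by half its change range.

```python
import numpy as np

def first_layer_scaling(mean, width):
    """Per-input weight and bias that map the interval
    [mean - width/2, mean + width/2] onto the operating range [-1, 1]."""
    width = np.asarray(width, dtype=float)
    mean = np.asarray(mean, dtype=float)
    w = 2.0 / width           # weight from the change range
    b = -w * mean             # bias from the average value
    return w, b

mean = np.array([10.0, -3.0])   # average values of the two inputs
width = np.array([4.0, 1.0])    # their change ranges
w, b = first_layer_scaling(mean, width)

x = np.array([12.0, -3.5])      # raw input levels
u = w * x + b                   # scaled to the operating range
# x = mean maps to u = 0; x = mean +/- width/2 maps to u = +/-1
```

An input at its average lands at the centre of the operating range, and the extremes of the change range land on its boundaries.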

*Optimality principles* are expressed by relations (34)–(36).

Here *γ* is the learning rate, which is usually determined empirically.
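The role of the learning rate *γ* can be illustrated by the usual gradient-descent weight update (a generic sketch, not the source's own optimality relations): each weight is shifted against the gradient of the estimating functional, with *γ* controlling the step size.

```python
import numpy as np

def gradient_step(w, grad, gamma=0.1):
    """One training step: move the weights against the gradient of the
    estimating functional; gamma is the empirically chosen learning rate."""
    return w - gamma * grad

# Toy estimating functional E(w) = ||w - t||^2 / 2, whose gradient is (w - t).
t = np.array([1.0, -2.0])   # minimizer of the functional (illustrative)
w = np.zeros(2)
for _ in range(50):
    w = gradient_step(w, w - t, gamma=0.2)
# w converges towards t; too large a gamma would make the iteration diverge
```

With γ = 0.2 the error shrinks by a factor 0.8 per step; the empirical choice of γ trades convergence speed against stability.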

**The proposed training model provides for the adaptation of the neural network so as to reach the minimum of a certain estimating functional, for example, of the quality of the network's solution of the assigned task.**

**BIBLIOGRAPHY**

1. Tank D. W., Hopfield J. J. Collective computation in neuronlike circuits // V mire nauki (Scientific American, Russian ed.). 1988. No. 2. P. 44–53.
2. Kussul V. M., Baidyk T. N. Development of a neural network architecture for recognition of object shapes in images // Avtomatika. 1990. No. 5. P. 56–61.
3. Hinton G. E. How neural networks learn from experience // V mire nauki (Scientific American, Russian ed.). 1992. No. 11–12. P. 103–107.