Alexander Serov et al. / Procedia Structural Integrity 5 (2017) 1160–1167
able to learn their own patterns from scratch. Another key point in the choice of this class of neural networks is the ability to construct dynamic models. Neural networks must be able to learn from data streams: they must implement Life-Long Machine Learning methods and support context-based learning. The numerical technology proposed here is based on the results of investigations of biological brains. These investigations were applied in the field of Artificial Intelligence as Spiking Neuron Models, Maass (1997), Pyle and Rosenbaum (2017), Jin et al. (2008), Merolla et al. (2014). The architecture of the Spiking Neural Networks considered here is time-dependent. The key principle behind this architecture is as follows: the appearance of a new element of knowledge requires the appearance of new elements in the structure of the neural network. New knowledge of an AI-based system is stored in the network through the creation of a new neuron or a new connection between existing neurons. The appearance of new structural elements is a threshold process; the quantity that controls this process is one of the most important parameters and must be tuned for each particular field of application of the network. The proposed architecture assumes a gradual accumulation of knowledge, expressed as an increase in the total number of network elements. The evolution of the network includes several epochs, each characterized by the emergence of a new layer of neurons in the network structure. By its emergence, each new layer fixes the appearance of patterns at a higher level of hierarchy than the previous ones. In contrast to the current classification of ANNs into cyclic (recurrent) and acyclic (feedforward) architectures, the proposed architecture cannot be attributed to either of these classes.
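The threshold-controlled growth described above can be illustrated with a minimal sketch. The class, names, and the Euclidean novelty measure below are illustrative assumptions, not the authors' algorithm: a new neuron arises only when an incoming pattern differs from every stored pattern by more than a tunable threshold.

```python
# Illustrative sketch (not the paper's method) of threshold-controlled
# structural growth: new knowledge -> new neuron, gated by a threshold.

import math

def distance(a, b):
    # Euclidean distance between two patterns (an assumed novelty measure)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class GrowingLayer:
    def __init__(self, novelty_threshold):
        self.novelty_threshold = novelty_threshold  # the tuned parameter
        self.neurons = []  # each neuron stores the pattern it represents

    def observe(self, pattern):
        """Add a neuron only if no stored pattern is close enough."""
        if all(distance(pattern, p) > self.novelty_threshold
               for p in self.neurons):
            self.neurons.append(pattern)  # a new structural element arises
            return True
        return False

layer = GrowingLayer(novelty_threshold=0.5)
layer.observe([0.0, 0.0])   # first pattern always creates a neuron
layer.observe([0.1, 0.1])   # close to an existing neuron: no growth
layer.observe([1.0, 1.0])   # novel pattern: a second neuron appears
print(len(layer.neurons))   # 2
```

Raising or lowering `novelty_threshold` directly controls how readily the structure grows, which is why the paper singles this quantity out as application-specific.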
This is related to the fact that the change of network structure is not based on a predefined architecture of connections between network elements. The architecture of the network has dynamics determined by its particular experience of perception and learning; hence, in principle, both cyclically and acyclically connected elements can arise. The driving force of the evolution of neural network interconnections includes several components, one of which is the accuracy of prediction of the outer world's dynamics. This prediction is necessary for planning actions by an intelligent technical system, and planning in turn is connected with goal-setting; we will not go into a discussion of these problems here. We consider the construction of new layers of abstraction to be completed when the error at a predetermined depth of prediction does not exceed the accuracy limit required for the solution of the current tasks of the intelligent system. The model of a neuron used in our model is as follows. Each neuron has several inputs and a single output, and may be in one of two states: activated or deactivated. A deactivated neuron cannot emit spikes. A neuron processes its input signals each time its input is changed by the network, and each neuron has its own logic of input data processing. Processing of the input signals produces a scalar value y_i = f_i(z), where i is the index (the unique identifier of the neuron in the network), z = (z_1, …, z_N) is the vector characterizing the input signals, and N is the total number of inputs of the i-th neuron. The neuron passes into the activated state if y_i > Th_i, where Th_i is the value of its activation potential. Once activated, the neuron begins to emit spikes. A Dynamic Artificial Neural Network (DANN) in the general case has several layers of neurons: an input layer, an output layer, and several intermediate layers.
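The neuron model above (scalar output y_i = f_i(z), activation when y_i > Th_i) can be sketched directly. The class name and the default choice of f as a plain sum are illustrative assumptions; the paper only requires that each neuron carry its own processing logic and threshold.

```python
# Minimal sketch of the two-state threshold neuron described above.
# f defaults to summation here purely for illustration; each neuron
# may carry its own input-processing logic f_i.

class SpikingNeuron:
    def __init__(self, threshold, f=sum):
        self.threshold = threshold   # activation potential Th_i
        self.f = f                   # neuron-specific logic f_i
        self.activated = False       # one of two states

    def process(self, z):
        """Re-evaluate the neuron each time its inputs z change."""
        y = self.f(z)                          # scalar y_i = f_i(z)
        self.activated = y > self.threshold    # activate iff y_i > Th_i
        return self.activated                  # activated -> may emit spikes

n = SpikingNeuron(threshold=1.0)
print(n.process([0.4, 0.3]))   # 0.7 <= 1.0 -> False, stays deactivated
print(n.process([0.8, 0.6]))   # 1.4 > 1.0  -> True, begins emitting spikes
```

Note that the neuron is re-evaluated on every input change rather than on a global clock, matching the event-driven character of the architecture.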
The neurons of the first layer may be characterized as perceptive neurons: they receive signals from the sensor system. The input layer is constructed as a set of filters, and its neurons are characterized by a zero value of the activation potential. The second layer of neurons is constructed on the basis of information about the purposes of the use of the DANN; its function is to pre-process the signals coming from the input layer. The intermediate layers of the network are created to represent various spatial and temporal patterns. These patterns become parts of the hierarchical structure used by the cognitive system for perception, learning, and understanding of the dynamics of the observed world. The output layer is constructed to represent the highest-level patterns that the network learns from observations. The structure of a DANN is built dynamically, from scratch, on the basis of the set of input signals; construction of the layers is initialized when the network starts processing input signals. Methods for learning Dynamic Artificial Neural Networks may be developed on the basis of the Harmony Theory formulated by Paul Smolensky, Smolensky (1986). The core of this theory is the harmony principle: the cognitive system is an engine for activating coherent assemblies of atoms and drawing inferences that are consistent with the knowledge represented by the activated atoms. The mathematics of Harmony Theory is founded on a well-known concept of cognitive science: inference through the activation of schemata. Schemata are coherent assemblies of knowledge atoms, which are the means for supporting the inference of knowledge. Knowledge atoms are fragments of