

A sigmoid transfer function was used for the output layer. All hidden layers of the neural network share the same functional form of transfer function, which was one of the following four mappings: sine, Leaky ReLU, Swish or Sigmoid. The best-performing of these mappings was then used to verify the resulting deep learning model. The analytical expressions of these functions are given in Tab. 4.

Function        Representation
Sine            f_k^(l)(z_k^(l)) = sin(z_k^(l))
Leaky ReLU      f_k^(l)(z_k^(l)) = z_k^(l) for z_k^(l) >= 0; 0.01 z_k^(l) for z_k^(l) < 0
Swish           f_k^(l)(z_k^(l)) = z_k^(l) / (1 + exp(-z_k^(l)))
Sigmoid         f_k^(l)(z_k^(l)) = 1 / (1 + exp(-z_k^(l)))

Table 4: Analytical representation of transfer functions.
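As an illustration only, the four candidate mappings in Tab. 4 can be written as element-wise functions. The sketch below is not the authors' implementation; the NumPy vectorisation and the function names are assumptions introduced here.

```python
import numpy as np

# Minimal sketch of the four candidate transfer functions of Tab. 4.
# Names and vectorised form are illustrative assumptions, not the paper's code.

def sine(z):
    return np.sin(z)

def leaky_relu(z):
    # z for z >= 0, 0.01*z for z < 0
    return np.where(z >= 0, z, 0.01 * z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def swish(z):
    # z * sigmoid(z) = z / (1 + exp(-z))
    return z * sigmoid(z)
```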

Input vector $\bar{x} = \{N_{pass};\, f;\, w_{overlap}\}$ and output vector $\bar{y} = \{\sigma_{res}^{max};\, \sigma_{res};\, h\}$ of the ANN are normalized using MinMax:

$$\tilde{k}(x,y) = \frac{k(x,y) - k_{I}(x,y)}{k_{II}(x,y) - k_{I}(x,y)} \qquad (4)$$

where $k_{I,II}(x,y)$ are the reduced minimal and maximal values of the corresponding input and output quantities, calculated as:

$$k_{I}(x,y) = k_{ave}(x,y) - 1.1\,\big(k_{ave}(x,y) - k_{min}(x,y)\big) \qquad (5)$$

$$k_{II}(x,y) = k_{ave}(x,y) + 1.1\,\big(k_{max}(x,y) - k_{ave}(x,y)\big) \qquad (6)$$

where $k_{ave}(x,y)$ are the average values of the input and output variables calculated from the training dataset, and $k_{max,min}(x,y)$ are their true maximum and minimum values.
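A minimal sketch of this normalization, assuming the MinMax scheme of Eqns. (4)-(6) exactly as written above; the helper names (reduced_bounds, minmax_normalize) are illustrative and not taken from the paper:

```python
import numpy as np

def reduced_bounds(k_train, margin=1.1):
    """Reduced minimum/maximum of Eqns. (5)-(6): the distance between the
    training-set mean and the true min/max is stretched by the factor 1.1."""
    k_ave = k_train.mean(axis=0)
    k_min = k_train.min(axis=0)
    k_max = k_train.max(axis=0)
    k_I = k_ave - margin * (k_ave - k_min)
    k_II = k_ave + margin * (k_max - k_ave)
    return k_I, k_II

def minmax_normalize(k, k_I, k_II):
    """MinMax scaling of Eqn. (4): maps each quantity into [0, 1] relative
    to the reduced bounds computed on the training data."""
    return (k - k_I) / (k_II - k_I)
```

In such a scheme the bounds would be fitted on the training data only and then reused unchanged for the validation data.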

Likewise to [21], the binary cross-entropy function was used as the loss function for training the neural network; its analytical representation is given by the following expression:

$$L_{B} = -\frac{1}{N_{S}} \sum_{n=1}^{N_{S}} \Big[ y_{n}\,\ln y_{ANN}(\bar{x}_{n}) + (1 - y_{n})\,\ln\big(1 - y_{ANN}(\bar{x}_{n})\big) \Big] \qquad (7)$$

where $y_{ANN}(\bar{x})$ is the current prediction of the ANN based on the input vector $\bar{x}$. The ANN model was trained using the back-propagation algorithm. In order to build the optimal model, 13 variants of the ANN structure were investigated, with 10 ANN samples trained for each structure. From the trained samples, the optimal structure was selected in terms of complexity and prediction accuracy. The root mean square errors, calculated over the 10 ANN samples, were analyzed separately on the training and validation data, together with the maximum errors of the ANN.
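The loss of Eqn. (7) and the root mean square error used to compare the trained ANN samples could be sketched as follows (a non-authoritative illustration; the helper names and the eps safeguard are additions made here):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Binary cross-entropy of Eqn. (7); eps keeps the logarithms finite."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return -np.mean(y_true * np.log(y_pred)
                    + (1.0 - y_true) * np.log(1.0 - y_pred))

def rmse(y_true, y_pred):
    """Root mean square error used to compare trained ANN samples on the
    training and validation data."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))
```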

