
The NN determines the coefficients of the connections between neurons. The multilayer perceptron algorithm is based on the following formulas by Haykin (1999):

NET_{jl} = \sum_m w_{mjl} x_{mjl}    (1)

OUT_{jl} = F(NET_{jl})    (2)

x_{mj(l+1)} = OUT_{ml}    (3)

where NET_{jl} is the weighted sum of the inputs x_{mjl} of neuron j in layer l, w_{mjl} are the connection weights, F is the activation function, and the outputs OUT_{ml} of layer l serve as the inputs of layer l+1.
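For concreteness, here is a minimal NumPy sketch of the forward pass defined by Eqs. (1)-(3); the network shape, weights and input below are illustrative placeholders, not values from the study:

```python
import numpy as np

def mlp_forward(x, weights, activation=np.tanh):
    """Forward pass of a multilayer perceptron following Eqs. (1)-(3).

    x       : input vector of the first layer
    weights : list of weight matrices, one per layer;
              weights[l][j, m] plays the role of w_mjl in Eq. (1)
    """
    out = x
    for W in weights:
        net = W @ out          # Eq. (1): NET_jl = sum_m w_mjl * x_mjl
        out = activation(net)  # Eq. (2): OUT_jl = F(NET_jl)
        # Eq. (3): the outputs of layer l become the inputs of layer l+1
    return out

# Illustrative 3-4-1 network with random weights.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(1, 4))]
print(mlp_forward(np.array([0.1, 0.5, -0.2]), weights))
```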

The basic parameters of a NN are its topology, the training algorithm and the activation functions of the neurons. In the current study, the sum-of-squares (SOS) error function was chosen, and the training method was Broyden–Fletcher–Goldfarb–Shanno (BFGS) by Gurney (1997), Richard (1998), Goodfellow et al. (2016). In particular, the hidden activation function is the hyperbolic tangent and the output activation function is logistic. The correlation coefficient between the actual and the predicted values of the test set was 0.97. The stopping parameter of network training was the number of epochs, which in this study was equal to 1000.

The aim of the boosted trees algorithm is to reflect the natural thinking process of a person making a decision by Mitchell (1997). In particular, the algorithm creates a model that predicts the value of a target variable based on several input variables, and the data obtained by the boosted trees method are easy to interpret. The construction of boosted trees consists of tree-growing and pruning stages. While growing the trees, one chooses the splitting criteria and the stopping rule for learning. Often this algorithm produces overly detailed trees, which later lead to large errors; this is the problem of overfitting. Therefore, some branches are pruned, which reduces the size of the decision trees by removing branches that have little influence on the decision. The boosted trees method is used when the results of one decision affect the next. The advantage of this method is the ability to work with large datasets without preparation, in particular, without normalization.

The basic idea of the support-vector machines algorithm is very simple by Smola et al. (2010). The learning algorithm constructs a hyperplane (in the two-dimensional case, a straight line) that separates the training data into two classes. The points closest to this boundary are called support vectors, and the best boundary is the one for which the distance to the support vectors is maximal. The basic parameter here is the regularization parameter, which trades off the curvature of the decision boundary against the accuracy of classification of the training sample: the larger its value, the more curved the boundary becomes in the model and the more accurately it classifies the training objects. Therefore, it is important to select the model parameters correctly for a particular dataset in order to achieve high accuracy. The loss function treats as errors only those predicted values whose distance from the training data is greater than ε, and the kernel parameter γ determines how strongly each element of the training data affects the shape of the boundary. The smaller γ is, the more objects are involved in choosing the boundary; if, on the contrary, it is large, the algorithm takes into account only the elements closest to the boundary. In the support-vector machines method, the points closest to each other are given greater weights when making a decision, so by choosing the right parameters one can achieve high accuracy. In this study, the regularization parameter was 10, ε was equal to 0.1, and γ was taken equal to 1, whereas the number of support vectors was 24. The radial basis function (RBF) was chosen as the kernel function.
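The paper does not state which software implemented these models. As an illustration only, the sketch below configures rough scikit-learn equivalents with the hyperparameters quoted above, together with the k-nearest neighbors model described in the next paragraph. Note the approximations: scikit-learn's 'lbfgs' solver is a limited-memory relative of BFGS, its MLP uses a linear rather than logistic output for regression, and the hidden-layer size, the value of k and the data are placeholders, not values from the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor

# Placeholder data standing in for the experimental dataset.
rng = np.random.default_rng(1)
X = rng.uniform(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.05 * rng.normal(size=100)

models = {
    # MLP: tanh hidden activation, squared-error (SOS) loss, up to 1000
    # epochs; 'lbfgs' is a limited-memory relative of the cited BFGS method.
    "mlp": MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                        solver="lbfgs", max_iter=1000),
    # Boosted trees: an ensemble of shallow trees built stage by stage.
    "boosted_trees": GradientBoostingRegressor(),
    # SVM regression with the parameters quoted in the text:
    # regularization C = 10, epsilon = 0.1, RBF kernel with gamma = 1.
    "svm": SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma=1.0),
    # k-nearest neighbors with the Euclidean metric (see the next
    # paragraph); k is not reported in this excerpt, so 5 is a placeholder.
    "knn": KNeighborsRegressor(n_neighbors=5, metric="euclidean"),
}

for name, model in models.items():
    model.fit(X, y)
    print(name, round(model.score(X, y), 3))
```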
The k-nearest neighbors method is based on a comparison of known elements with new ones. In particular, its basic idea is that a new object to be predicted belongs to the class that is most common among its k nearest neighbors in the training sample. In this study, the distance between neighbors is Euclidean.

3. Experimental Approach

Smooth cylindrical specimens of AMg6 alloy were subjected to tensile loading on an STM-100 electrohydraulic machine at a temperature of 293 K. The specimens, with a diameter of 10 mm and a working part 25 mm long, were manufactured by turning bars in the as-delivered state. The tensile loading was performed at a rate that was equal
