PSI - Issue 42
Iryna Didych et al. / Procedia Structural Integrity 42 (2022) 1344–1349
Each node implements a test function with discrete outcomes corresponding to its branches. Given the input data, each node is evaluated and one branch is selected according to the result. The process starts from the root and repeats recursively until a leaf is reached; the value stored in that leaf is the result. The method of boosted trees belongs to the non-parametric approaches: the structure of the tree is not fixed in advance, but the tree grows during training, with branches and leaves added depending on the complexity of the task.

A neural network is a collection of interconnected neurons. Each input of a neuron receives the output signal of another neuron, and the incoming signal is multiplied by the corresponding synaptic weight. Neural networks adjust these weights through the learning process. To solve a given problem, one must choose how the neurons are connected and select the values of the connection weights. When connected in a sufficiently large network with controlled interaction, such rather simple neurons can together solve quite complex problems, as noted by Haykin (1999).

The structure of the three-layer neural network is shown in Fig. 2, where the vectors x = (x1, x2, ..., xn) and y = (y1, y2, ..., ym) are the input and output signals, respectively, wij and wjk are the synaptic weights between the layers, and (b1, b2, ..., bl) are the thresholds of the hidden layer. The input signal passes to the neurons of the hidden layer, and the output signals of the hidden layer serve as inputs to the third layer; the neurons of each layer receive as inputs only the outputs of the previous layer. The network is trained with the error back-propagation algorithm to reduce the learning error.
Fig. 2. Architecture of a three-layer neural network with one hidden layer and one output layer by Wang et al. (2017).
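The forward pass of such a three-layer network can be sketched as follows. This is a minimal illustration, not the authors' actual model: the sigmoid activation, the linear output layer, and all dimensions and weight values are assumptions chosen only to show how the signal propagates from inputs through the hidden layer to the outputs.

```python
import numpy as np

def sigmoid(z):
    # Logistic activation, a common choice for the hidden layer
    # of a classic multilayer perceptron (assumed here).
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, w_ij, b_hidden, w_jk, b_out):
    """One forward pass through a three-layer network.

    x        : input vector, shape (n,)
    w_ij     : input-to-hidden synaptic weights, shape (n, l)
    b_hidden : hidden-layer thresholds, shape (l,)
    w_jk     : hidden-to-output synaptic weights, shape (l, m)
    b_out    : output-layer thresholds, shape (m,)
    """
    hidden = sigmoid(x @ w_ij + b_hidden)  # hidden-layer outputs
    y = hidden @ w_jk + b_out              # outputs of the third layer
    return y

# Tiny example: 2 inputs, 3 hidden neurons, 1 output (hypothetical sizes).
rng = np.random.default_rng(0)
x = np.array([0.5, -1.0])
w_ij = rng.normal(size=(2, 3))
b_h = np.zeros(3)
w_jk = rng.normal(size=(3, 1))
b_o = np.zeros(1)
print(forward(x, w_ij, b_h, w_jk, b_o).shape)
```

Training would then adjust `w_ij`, `w_jk`, and the thresholds by back-propagating the output error, which this sketch omits.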
3. Results and discussion

The deformation diagram of the aluminum alloy AL-6061, which is sensitive to heat treatment, is predicted by machine learning methods from the experimental data reported by Weaver et al. (2016). The experimental data are normalized by taking the decimal logarithm, which accelerates learning; without this normalization, the forecasting error increases. During learning, the data set is divided into two unequal parts, a training sample and a test sample. The sample contains 2798 elements: 70% of the experimental data at different temperatures are randomly selected for the training sample, and the remaining 30% are kept to assess the quality of prediction. Stress and temperature are chosen as the input parameters, while deformation is the output parameter. The results obtained are found to be in good agreement with the experimental data. The dependences of the experimental deformations on the predicted ones, as well as the dependences of the deformation on the stress (Fig. 4) at temperatures T = 343 and 413 °C, are constructed by machine learning methods.
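The preprocessing described above, decimal-logarithm normalization followed by a random 70/30 split, can be sketched as below. The helper names and the synthetic placeholder data are assumptions for illustration only; the real stress, temperature, and deformation values come from Weaver et al. (2016).

```python
import numpy as np

def log_normalize(values):
    # Decimal-logarithm normalization, as described in the text;
    # assumes strictly positive values.
    return np.log10(values)

def split_sample(X, y, train_frac=0.7, seed=42):
    # Randomly divide the sample into unequal training and test parts.
    n = len(X)
    idx = np.random.default_rng(seed).permutation(n)
    n_train = int(round(train_frac * n))
    train, test = idx[:n_train], idx[n_train:]
    return X[train], X[test], y[train], y[test]

# Placeholder sample of 2798 elements, matching the size in the text;
# columns stand for the inputs (stress, temperature), y for deformation.
rng = np.random.default_rng(1)
X = np.abs(rng.normal(size=(2798, 2))) + 1.0
y = np.abs(rng.normal(size=2798)) + 1.0
X_tr, X_te, y_tr, y_te = split_sample(log_normalize(X), log_normalize(y))
print(len(X_tr), len(X_te))  # 1959 839
```

With 2798 elements, the 70/30 split yields 1959 training and 839 test elements; the test part is used only to assess the prediction quality.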