
S. Mahesh et al. / Procedia Structural Integrity 60 (2024) 382–389

2. Methodology

There are various steps in the process of making predictions using machine learning algorithms. The major steps involved in this process are highlighted in the flow chart shown in Fig. 3 and are discussed in the subsequent subsections. Data for ML purposes is usually generated through experimentation or obtained from standard handbooks and peer-reviewed publications. In the current study, the data for the nickel-based superalloys GTM 720 and GTM 718 is obtained from the tests performed by Malipatil et al. (2021) to study the fatigue crack growth and damage tolerance behaviour of these aerospace alloys.

2.1. Data pre-processing

The data for the current work is determined and tabulated. The region of interest in the present work is the Paris regime, since it depicts the linear crack growth phase during load application and is relatively simple to model and subsequently use for life prediction. In this regime the crack growth rate follows the Paris law, da/dN = C(ΔK)^m, where ΔK is the stress intensity factor range and C and m are the material constants (intercept and slope) listed in Table 1. The input data for training the model is shown in Table 1. It is clear that the amount of data available for training a machine learning model is insufficient; nevertheless, the present work is a preliminary study on the application of ML to the prediction of material behaviour, and hence this data was used. For future work, more exhaustive data needs to be gathered in a similar fashion. It is also clear from the training data that processing prior to training is essential. Standard data-processing procedures, such as scaling and normalizing the features appropriately to give them equal weightage, were performed.

Table 1: FCGR data - input for training the ML algorithm

Material                            Yield strength (MPa)   Ultimate strength (MPa)   Stress ratio (R)   C (intercept)   m (slope)
GTM 720, Malipatil et al. (2021)    1100                   1530                      0.1                5.48E-13        5.55
                                                                                     0.3                2.70E-11        4.62
                                                                                     0.5                3.88E-10        3.99
                                                                                     0.7                6.56E-09        3.32
GTM 718, Malipatil et al. (2021)    1034                   1241                      0.1                3.06E-09        2.953
                                                                                     0.3                7.18E-09        3.045
                                                                                     0.5                6.81E-09        3.078
                                                                                     0.7                6.32E-09        3.205

2.2. Building the model

As the processes of building, training, testing and error minimization of the model are interlinked, this section provides an insight into these stages of the process (Fig. 3). The BPNN architecture used here is an artificial neural network (shown in Fig. 2) with one input layer, three hidden layers and an output layer. The three nodes in the input layer represent the features selected for training the model, viz. yield strength, ultimate strength and stress ratio, while the output nodes represent the Paris constants C and m. The three hidden layers consist of twenty-four, twenty-five and twenty-five nodes, respectively. The activation function used in this case is the rectified linear unit (ReLU), which is said to perform well for regression models; a ReLU function returns zero if the input is negative and the "raw" (actual) input otherwise (Skansi (2018)). The mean squared error loss function is used at the end of each iteration to determine the error, and stochastic gradient descent is used to optimize the model through the process of error "back propagation". These model parameters were decided by a trial-and-error approach. The hyper-parameters of the model are randomly initiated and
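The architecture described above can be sketched as a forward pass in NumPy. This is an illustrative sketch only: He-style random initialization and a linear output layer are assumptions (the paper only says weights are randomly initiated); training would then backpropagate the MSE gradients via stochastic gradient descent as described.

```python
import numpy as np

rng = np.random.default_rng(42)

# Layer sizes per Section 2.2: 3 inputs (yield strength, ultimate strength,
# stress ratio) -> hidden layers of 24, 25 and 25 nodes -> 2 outputs (C, m).
sizes = [3, 24, 25, 25, 2]

# Assumption: He-style initialization, a common choice with ReLU.
weights = [rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / n_in)
           for n_in, n_out in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n_out) for n_out in sizes[1:]]

def relu(z):
    # ReLU: zero for negative inputs, the raw input otherwise.
    return np.maximum(0.0, z)

def forward(x):
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(a @ W + b)              # ReLU on the hidden layers
    return a @ weights[-1] + biases[-1]  # linear output for regression

def mse(pred, target):
    # Mean squared error, the loss minimized at the end of each iteration.
    return np.mean((pred - target) ** 2)

# One (already standardized) training example, as in Section 2.1.
x = np.array([0.5, -0.3, 1.0])
pred = forward(x)  # predicted (C, m)
```

Stochastic gradient descent would then update `weights` and `biases` against the gradient of `mse`, one sample (or mini-batch) at a time.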
