
S. Mahesh et al. / Procedia Structural Integrity 60 (2024) 382–389


Mohanty et al. (2009) used a back propagation neural network (BPNN) algorithm to predict the crack length 'a' for an applied number of load cycles 'N'. Although the prediction shows good success, it is unclear why such a large network of nodes and layers was used in the architecture. The fatigue crack growth behaviour of Al 2014 alloy was studied by Raja et al. (2020) using BPNN and extreme learning machine (ELM) algorithms. It was found that the BPNN model approximates the local data points (points within the range of the training data) accurately, whereas the ELM model precisely extrapolates the unstable crack growth region. Zhang et al. (2021) developed a machine learning-based real-time fatigue crack growth detection method that combined computer vision and machine learning techniques to predict crack growth and its direction. In a comparison of various ML models, it was found that the decision tree algorithm was best suited for this application. Wang et al. (2017) applied various deep learning algorithms to predict the entire sigmoidal-shaped FCGR curve. Algorithms such as the extreme learning machine (ELM), radial basis function network (RBFN) and genetic-algorithm-optimized back propagation network (GABP) were used for this purpose. The prediction performance of each algorithm was compared with that of the two-parameter model, also known as the K* approach, where K* is the stress intensity factor; more details of its usefulness can be found in the work of Kujawski (2001). The results show that the prediction accuracy and effectiveness of machine learning algorithms (MLAs) are better than those of the K* approach.

The present work focuses on predicting the fatigue crack growth rate in the Paris region using BPNN. For comparison, experimental data from the literature are used. BPNN learns the patterns and dependencies across variables by propagating the prediction error backwards through the network, hence the name back propagation neural network.
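In the Paris region, the crack growth rate follows the power law da/dN = C(ΔK)^m, which plots as a straight line on log–log axes. As a minimal illustration of this relationship (using assumed constants C and m on synthetic data, not values from the experiments discussed here), the Paris-law constants can be recovered from (ΔK, da/dN) pairs by a least-squares fit in log–log space:

```python
import numpy as np

# Hypothetical illustration: in the Paris region, da/dN = C * (dK)^m,
# so log10(da/dN) is linear in log10(dK). The constants below are
# assumed for the sketch, not taken from the paper's experiments.
rng = np.random.default_rng(0)
C_true, m_true = 1e-11, 3.0                     # assumed Paris constants
dK = np.linspace(10.0, 30.0, 40)                # stress intensity range, MPa*sqrt(m)
dadN = C_true * dK**m_true                      # crack growth rate, m/cycle
dadN *= np.exp(rng.normal(0.0, 0.02, dK.size))  # mild experimental scatter

# Linear fit in log-log coordinates: slope gives m, intercept gives log10(C).
slope, intercept = np.polyfit(np.log10(dK), np.log10(dadN), 1)
m_fit, C_fit = slope, 10.0**intercept
print(f"m = {m_fit:.2f}, C = {C_fit:.2e}")
```

A trained BPNN replaces this fixed power-law form with a learned mapping, which is what allows it to capture deviations from strict Paris-law behaviour in the data.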
The architecture of a typical BPNN is shown in Fig. 2. It consists of an input layer, one or more hidden layers depending on the complexity of the problem, and an output layer. The nodes (neurons) in the input layer represent the input features selected to train the model, and the output nodes represent the features to be predicted. The hidden nodes perform the computations necessary to map and transform the input information into the output. This is achieved through a dense interconnection of neurons, whose weights and biases build this correlation. The activation functions of the hidden nodes act like Boolean switches, deciding whether a neuron is activated or deactivated based on a threshold. Further details on the working principle of the BPNN algorithm can be found in the work of Skansi (2018), Goodfellow et al. (2016) and Géron (2022).
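The forward pass, error back-propagation and weight updates described above can be sketched in a few lines of NumPy. This is a generic one-hidden-layer network trained on a toy target, intended only to illustrate the mechanism, not the architecture or data used in the present work:

```python
import numpy as np

# Minimal one-hidden-layer BPNN sketch: input -> hidden (sigmoid) ->
# linear output, trained by error back-propagation with gradient descent.
# The toy target y = sin(2*pi*x) stands in for any nonlinear mapping.
rng = np.random.default_rng(1)
X = np.linspace(0.0, 1.0, 50).reshape(-1, 1)   # single input feature
y = np.sin(2.0 * np.pi * X)                    # toy target to learn

n_hidden = 8
W1 = rng.normal(0.0, 1.0, (1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 1.0, (n_hidden, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(20000):
    # forward pass: compute hidden activations and prediction
    h = sigmoid(X @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y                    # prediction error at the output
    # backward pass: propagate the error through the layers
    dW2 = h.T @ err / len(X); db2 = err.mean(0)
    dh = (err @ W2.T) * h * (1.0 - h)  # chain rule with sigmoid derivative
    dW1 = X.T @ dh / len(X); db1 = dh.mean(0)
    # gradient-descent weight and bias updates
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

mse = float(np.mean(err**2))
print(f"final training MSE: {mse:.4f}")
```

In practice, a framework (e.g. scikit-learn or TensorFlow) would handle these updates, but the loop above is the error back-propagation process the name BPNN refers to.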

Fig. 3. Flowchart of the BPNN algorithm used to predict FCGR in the Paris regime.
