(2021)). ANN has been proven to be an effective method for approximating the limit state function (Gomes H M et al. (2004), Elhewy A H et al. (2006), Dai H Z et al. (2015)). Available methods for approximating the limit state function by ANN fall into two main groups: functional approximation and data classification. Jorge E. Hurtado and Diego A. Alvarez (2001) compared these two configurations and found that functional approximation takes more training time but performs better.

The Monte Carlo simulation (MCS) method is a powerful simulation technique for computing failure probabilities of highly non-linear complex models; given a sufficiently large number of samples, it approaches the exact value with good precision. Many researchers have verified its effective application in reliability analysis (Mao G J et al. (2022)) and proposed various variance reduction techniques (Thedy J et al. (2021), Rashki M (2021), Xu C L et al. (2020)). Recently, taking ANN as the LSF-approximating method and MCS as the $P_f$-computing technique has been broadly applied to highly non-linear problems. Chojaczyk A.A. et al. (2015) computed the $P_f$ of stiffened plates by combining ANN, the first-order reliability method, and MCS. H. Abbasianjahromi and S. Shojaeikhah (2021) assessed steel four-bolt unstiffened extended end-plate connections using MCS and ANN. Meanwhile, many researchers have applied this combination to various complex models (Li W Z et al. (2022), Jha B N et al. (2017)).

Previous studies show the great power of ANN in approximating the LSF, but they mostly take the mean square error as the loss function (Hurtado J E et al. (2001), Chojaczyk A A et al. (2015), Vazirizade S M et al. (2017), Vasudevan H et al. (2018)). This loss function has a large gradient when the prediction is far from the true value, which results in a rapid rate of convergence and great performance on accurate samples. However, data in practical engineering contain many incorrect samples, which are hard to identify due to the high nonlinearity of the model. Thus, a robust loss function is presented in this article and compared with the traditional loss function to show its capacity (an illustrative comparison of the two kinds of loss is sketched at the end of this introduction).

In practical engineering, the analysis of a structure is usually time-consuming or resource-intensive, especially for a dynamic implicit model, for which the analytical expression of the LSF is hard to obtain and each evaluation is expensive, whether in finite element analysis or in field tests. Hence, there is a high demand for reducing the sample size. Latin hypercube sampling (LHS) (Helton J C et al. (2003), Chen J J et al. (2018)) and the uniform experimental design (UED) (Fang K-T et al. (2018), Liu Q et al. (2011)) are taken as the sample-size-reducing methods in this article.

In this paper, an advanced loss function is proposed and two cases are taken to prove its robustness. Case one is an explicit problem, whose reliability index is easy to compute by MCS or by calculus; it is taken to show ANN's ability in reliability analysis. A marine lubricating oil cooler under a plus-minus triangular wave and internal pressure (1 MPa) is considered as representative of the dynamic implicit model, one kind of highly non-linear problem. In this application, ANN is used for LSF approximation and combined with MCS for reliability assessment, and its performance is judged by the deviation of the resulting reliability index from the exact value.
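The gradient argument above can be made concrete with a short sketch. The excerpt does not reproduce the exact robust loss proposed in the article, so the Huber loss is used below purely as an illustrative robust alternative to the mean square error; the function names are ours.

```python
import numpy as np

def mse_loss(y_pred, y_true):
    """Mean square error: the gradient grows linearly with the residual,
    so outliers (incorrect samples) dominate the weight updates."""
    r = y_pred - y_true
    return np.mean(r ** 2), 2.0 * r / r.size

def huber_loss(y_pred, y_true, delta=1.0):
    """Huber loss (illustrative robust loss): quadratic near zero,
    linear for |residual| > delta, so the gradient magnitude of an
    outlier is capped at delta."""
    r = y_pred - y_true
    quad = np.abs(r) <= delta
    loss = np.where(quad, 0.5 * r ** 2, delta * (np.abs(r) - 0.5 * delta))
    grad = np.where(quad, r, delta * np.sign(r))
    return np.mean(loss), grad / r.size

# One corrupted training sample with residual 10:
r = np.array([10.0])
print(mse_loss(r, np.zeros(1))[1])    # gradient 20: the outlier dominates
print(huber_loss(r, np.zeros(1))[1])  # gradient 1:  the outlier is damped
```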
2. Methodology

ANN and sample reduction techniques are widely used in structural reliability research. We use ANNs with three hidden layers to approximate the limit state function and take two methods to sample the random variables.

2.1. ANN

In an ANN, a neuron is an element that processes several inputs into one output, so this operation can be described as the dot product of two vectors, while a layer containing multiple neurons is equivalent to the product of two matrices, given as Eq. (1):

$\mathbf{Z}^{T} = \mathbf{X}^{T}\mathbf{W} + \mathbf{B}$ (1)

where $\mathbf{W}$ is the weight matrix, related to the importance of the pre-layer neurons, and $\mathbf{B}$ is the bias, a corrective term that prevents neurons from suffering a vanishing gradient, especially when the input vector $\mathbf{X}$ is equal to zero. $\mathbf{Z}$ is an intermediate vector called the activation, the independent variable of the transfer function (2):

$\mathrm{ReLU}(\mathbf{Z}) = \max(0, \mathbf{Z})$ (2)

whose results are the inputs of the next hidden layer. The whole neural network is indicated in Fig. 1.
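As a concrete illustration of Eqs. (1) and (2), the following minimal NumPy sketch evaluates one hidden layer; the layer width and input dimension are our own choices for illustration, as the paper's actual layer sizes are not given in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_neurons = 4, 8
W = rng.normal(size=(n_inputs, n_neurons))  # weights: importance of pre-layer neurons
B = rng.normal(size=n_neurons)              # bias: corrective term in Eq. (1)

def relu(z):
    """Transfer function of Eq. (2): element-wise max(0, z)."""
    return np.maximum(0.0, z)

def hidden_layer(x, W, B):
    """Eq. (1): the activation Z is the product of the input vector and
    the weight matrix plus the bias; Eq. (2) is then applied to Z."""
    z = x @ W + B
    return relu(z)

x = rng.normal(size=n_inputs)        # input vector X
print(hidden_layer(x, W, B).shape)   # (8,): the inputs of the next hidden layer
```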
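Once the network approximates the LSF, the MCS step described in the introduction reduces to counting surrogate failures. The sketch below shows this step under stated assumptions: `surrogate_g` is a hypothetical stand-in for the trained ANN, and the two standard-normal random variables are chosen for illustration only.

```python
import numpy as np
from scipy.stats import norm

def surrogate_g(x):
    """Hypothetical stand-in for the trained ANN approximation of the
    limit state function g(x); g(x) <= 0 denotes failure."""
    return 3.0 - x[:, 0] - x[:, 1]

rng = np.random.default_rng(1)
n = 1_000_000                        # MCS needs a large sample for small P_f
x = rng.standard_normal((n, 2))      # assumed standard-normal random variables

p_f = np.mean(surrogate_g(x) <= 0.0)   # failure probability: N_fail / N
beta = -norm.ppf(p_f)                  # reliability index recovered from P_f
print(p_f, beta)
```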