1. Introduction

Uncertainties are inherent in almost all engineering problems, making reliability analysis essential for estimating the probability of failure. However, the computational cost of evaluating such probabilities becomes prohibitively high when complex geometries and multiple failure modes are involved. As modern structures grow increasingly sophisticated, the limit state function (LSF) can often only be evaluated through numerical methods such as the Finite Element Method (FEM) or the Boundary Element Method (BEM), making each evaluation computationally expensive. Furthermore, structures in aerospace applications are typically designed with high safety margins, requiring extensive simulations to estimate the correspondingly small failure probabilities (Lee et al., 2022). The primary objective of reliability analysis methods is therefore to minimize the number of LSF evaluations while maintaining high accuracy in the failure probability estimate (Su et al., 2020).

To address these computational challenges, numerous methods have been developed. The first-order reliability method (FORM) (Ditlevsen and Madsen, 1996) approximates the LSF by a linear expansion; this approximation fails for highly nonlinear LSFs, which motivated the second-order reliability method (SORM) (Lemaire, 2013) to capture curvature effects. While these methods demonstrate reasonable accuracy in many applications, they struggle with high-dimensional problems and complex LSFs. Conversely, Monte Carlo Simulation (MCS) provides a robust and universally applicable benchmark for failure probability estimation, but it is computationally inefficient, particularly for problems with small failure probabilities.

To overcome these limitations, two main approaches have emerged: variance reduction techniques and metamodel-based methods. Variance reduction methods reduce the variance of the MCS failure probability estimate through advanced sampling techniques, including importance sampling (IS), subset simulation (SS), and weighted sampling (WS); a comprehensive review of existing sampling methods is provided by Lee et al. (2022). Metamodelling methods instead construct computationally efficient surrogates to replace expensive LSF evaluations. Common surrogate models include Kriging (Gaussian process) models, support vector machines (SVM), and artificial neural networks (ANN); detailed comparative reviews of these metamodelling approaches are given by Teixeira et al. (2021) and Hoole et al. (2020). Among these, Kriging models have gained widespread adoption in reliability analysis because they provide both a prediction and an associated uncertainty estimate at unsampled locations (Kaymaz, 2005).

Recent advances combine variance reduction techniques with metamodelling through adaptive sequential sampling strategies. Echard et al. (2011) introduced AK-MCS (active learning reliability method combining Kriging and Monte Carlo Simulation), which actively selects training points based on the U learning function, prioritizing regions that strongly influence the failure probability estimate rather than exploring the entire design space uniformly. Subsequently, Echard et al. (2013) proposed AK-IS, which integrates Kriging with importance sampling to further reduce the number of LSF evaluations.
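For concreteness, the two quantities at the heart of AK-MCS can be stated compactly (the notation $\hat{G}$ for the Kriging surrogate of the LSF $g$, with mean $\mu_{\hat{G}}$ and standard deviation $\sigma_{\hat{G}}$, is introduced here for illustration only). The crude MCS estimator of the failure probability and its coefficient of variation are

\[ \hat{P}_f = \frac{1}{N} \sum_{i=1}^{N} \mathbb{I}\left[ g(\mathbf{x}_i) \le 0 \right], \qquad \mathrm{CoV}\big[\hat{P}_f\big] = \sqrt{\frac{1 - P_f}{N P_f}}, \]

which makes explicit why small failure probabilities demand very large sample sizes. The U learning function of Echard et al. (2011) reads

\[ U(\mathbf{x}) = \frac{\big| \mu_{\hat{G}}(\mathbf{x}) \big|}{\sigma_{\hat{G}}(\mathbf{x})}; \]

the candidate sample minimizing $U$ is evaluated on the true LSF and added to the training set, and enrichment typically stops once $\min U \ge 2$, i.e. once the sign of every prediction is trusted with a probability of about $\Phi(2) \approx 0.977$.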
These developments represent a shift from passive simulation methods to adaptive strategies that strategically allocate computational resources near the limit state boundary. The effectiveness of such methods depends critically on the design of the active learning function and on an appropriate stopping criterion for metamodel refinement; a comprehensive review of active learning functions and stopping criteria is presented by Moustapha et al. (2022).

It is worth noting that the majority of existing surrogate-based reliability methods rely on single-fidelity models, where all training data come from the same computational model. Multi-fidelity approaches remain comparatively unexplored in the reliability analysis literature, although several recent methods incorporate multi-fidelity (MF) models to further reduce computational cost by leveraging low-fidelity (LF) models for global exploration and high-fidelity (HF) models for local refinement. The adaptive Kriging-assisted multi-fidelity subset simulation (AK-MFSS) (Dai et al., 2025) builds upon subset simulation by evaluating only the most critical conditional probabilities with HF models, while employing LF-trained Kriging surrogates for the remaining subsets. The multi-fidelity efficient global reliability analysis (mfEGRA) (Chaudhuri et al., 2021) extends the single-fidelity EGRA framework of Bichon et al. (2008) with multi-fidelity Gaussian process surrogates and a two-stage active learning strategy: training points are selected with the Expected Feasibility Function (EFF) (Bichon et al., 2011), recalled below, and the fidelity level is chosen through cost-weighted information gain metrics. Similarly, the adaptive multi-fidelity Gaussian process reliability analysis (AMGPRA) (Zhang et al., 2022) employs a unified learning function that simultaneously determines both the sampling location and the fidelity level. The Active-learning Multi-Fidelity Gaussian
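For reference, a compact statement of the selection criterion mentioned above: the EFF of Bichon et al. (2008) measures how strongly the surrogate prediction $\hat{G}(\mathbf{x})$ is expected to fall within a band of half-width $\epsilon$ around the limit state threshold $\bar{z}$ (here $\bar{z} = 0$),

\[ EFF(\mathbf{x}) = \int_{\bar{z}-\epsilon}^{\bar{z}+\epsilon} \left[ \epsilon - \left| \bar{z} - z \right| \right] f_{\hat{G}(\mathbf{x})}(z) \, \mathrm{d}z, \]

where $f_{\hat{G}(\mathbf{x})}$ is the Gaussian predictive density of the surrogate and $\epsilon$ is commonly taken proportional to the local prediction standard deviation, e.g. $\epsilon = 2\sigma_{\hat{G}}(\mathbf{x})$. Points maximizing the EFF lie close to the predicted limit state and/or carry large predictive uncertainty, the exploration-exploitation balance that the multi-fidelity variants above inherit.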