PSI - Issue 80

Sadjad Naderi et al. / Procedia Structural Integrity 80 (2026) 77–92

Fig. 9 presents the predicted a–N relationships with associated 95% confidence intervals, generated by GPR using the same online dataset as in the DBN framework, streamed sequentially from 5 to 22 iterations.

Computationally, GPR offers a substantial speed advantage, requiring an average of 886.5 ms per iteration on a standard M1 MacBook Pro, compared with DBN inference, which typically requires several seconds per update. The breakdown comprises 875.6 ms for model fitting and only 10.9 ms for prediction, enabling near-real-time forecasting.

GPR exhibits limited extrapolation capability in early iterations, failing to reach the critical crack length (a_c = 15 mm) until iteration 16, with 16 data points (73% of the available observations). In contrast, the DBN schemes achieve reliable extrapolation from iteration 5 onwards owing to their physics-based constraints. Once sufficient data accumulate, GPR predictions stabilise rapidly, with the final N_f converging to 136k cycles (±712 cycles standard deviation) over the last three iterations, closely matching the DBN Scheme I prediction of 137k cycles and the Scheme III prediction of 136k cycles.

The evolution of the confidence intervals reveals GPR's data-dependency limitations: the first successful extrapolations carry wide uncertainties (±22.7% at iteration 16), narrowing to ±5.3% by iteration 22. This contrasts with the DBN approaches, which maintain consistent uncertainty quantification throughout the updating process (±1.6% for Scheme I, ±2.1% for Scheme II). The coefficient of variation of the final GPR predictions (0.0052) indicates high precision once convergence is achieved, though reaching it requires 73% of the available data, compared with DBN convergence at roughly 45% data availability.

The analysis demonstrates a fundamental trade-off: GPR provides computational efficiency (~0.9 seconds vs.
several seconds per iteration) and matches final prediction accuracy when sufficient data are available, but lacks the extrapolation robustness and early-stage reliability of the physics-constrained DBN approaches. For DT applications requiring continuous updating with limited initial data, physics-based frameworks offer superior predictive capability, whereas GPR excels in scenarios with dense historical datasets and strict computational speed requirements.

5. Conclusion and perspectives

This study demonstrates a DT architecture integrating multi-source data fusion with physics-informed Bayesian prognosis for metallic aircraft structures operating under damage-tolerant design principles. Two key enabling technologies are highlighted: (i) a data fusion strategy combining offline inspection data with online crack quantifications, and (ii) a transfer learning mechanism enhancing model adaptability across operational conditions. The framework addresses the challenge of obtaining reliable lifetime predictions given the scarcity of high-quality prognostic data, particularly when the data acquired locally and globally around the damage are insufficient to characterise the fatigue damage/crack progression fully. The data augmentation approach demonstrates the potential to leverage existing maintenance databases while enhancing predictive accuracy through continuous monitoring. Physics-based modelling constraints keep predictions within physically admissible bounds while reducing dependency on extensive training datasets. However, computational overhead and the limited generalisability of the physics model remain limitations that require careful consideration in deployment scenarios. Comparative analysis with GPR reaffirmed the critical importance of data quality in fatigue crack growth prediction, where "quality" corresponds to sufficient measurements in high crack-growth-rate regimes.
For applications where back-calculation is not required, GPR may offer computational advantages over certain Bayesian approaches, though this remains dependent on data availability and comes at the cost of a limited ability to represent parametric uncertainties explicitly (e.g. material variability). Regarding future work, two development directions will be pursued. The first is refinement of the prognosis model in light of the current findings: a robust probabilistic data-driven approach is required with reduced dependency on detailed physics models, thereby decreasing sensitivity to specific structural geometries, material properties, loading scenarios, and environmental conditions while addressing computational cost constraints. The second is the development and integration of the optimisation and decision-making modules shown in Fig. 1. The key technical consideration lies in the philosophical design of digital twins: incorporating human roles as integral system components rather than as passive end-users. Traditional semi-automated DTs mainly serve to provide information to humans at the decision-making level, but the outcomes of human decisions and subsequent actions often remain disconnected from the digital system. This creates loose coupling between the physical and digital worlds without proper closure of the feedback loop.
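To make the GPR comparison discussed above concrete, the following minimal sketch outlines the workflow in numpy: a Gaussian process with a squared-exponential kernel is fitted to streamed a–N (crack length vs. load cycle) observations, a 95% confidence band is predicted on a forecast grid, a cycles-to-critical estimate N_f is read off, and a coefficient of variation is computed as a convergence check. All numerical values (the synthetic growth curve, kernel hyperparameters, and the cycle counts near the reported 136k-cycle plateau) are illustrative assumptions, not the paper's dataset or implementation.

```python
import numpy as np

def rbf(X1, X2, length=2e4, var=25.0):
    """Squared-exponential kernel over cycle counts (hyperparameters assumed)."""
    d = X1[:, None] - X2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

# Synthetic a-N observations: crack length a (mm) vs. load cycles N,
# mimicking one point arriving per iteration in the online setting.
N_obs = np.linspace(1e4, 1.1e5, 16)
a_obs = 1.0 * np.exp(2.3e-5 * N_obs)          # Paris-law-like growth (illustrative)

# "Model fitting" step (the expensive part of each GPR iteration):
# factorise the kernel matrix and precompute the weight vector.
noise = 1e-4
mean_a = a_obs.mean()
K = rbf(N_obs, N_obs) + noise * np.eye(len(N_obs))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, a_obs - mean_a))

# "Prediction" step (cheap): posterior mean and 95% band on a forecast grid.
N_grid = np.linspace(1e4, 1.6e5, 300)
Ks = rbf(N_grid, N_obs)
a_mean = mean_a + Ks @ alpha
v = np.linalg.solve(L, Ks.T)
a_var = np.clip(np.diag(rbf(N_grid, N_grid)) - np.sum(v * v, axis=0), 0.0, None)
lower = a_mean - 1.96 * np.sqrt(a_var)
upper = a_mean + 1.96 * np.sqrt(a_var)

# Cycles-to-critical estimate: first grid point where the mean reaches
# a_c = 15 mm. This may fail (hit empty) when the GP, reverting to its
# prior mean beyond the data, cannot extrapolate that far -- the
# early-iteration limitation noted in the text.
a_c = 15.0
hit = np.flatnonzero(a_mean >= a_c)
N_f = N_grid[hit[0]] if hit.size else None

# Convergence check: coefficient of variation over the last three N_f
# estimates (values chosen near the reported 136k-cycle plateau).
preds = np.array([135_400.0, 136_100.0, 136_800.0])
cov = preds.std(ddof=1) / preds.mean()
```

The split between the Cholesky factorisation ("fit") and the matrix–vector products ("predict") mirrors the timing breakdown reported above, where fitting dominates the per-iteration cost and prediction is comparatively negligible.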

