Ivan Roselli et al. / Procedia Structural Integrity 78 (2026) 128–136
Fig. 2. Workflow of the implemented AI procedure: I and Î indicate the input data and the output reconstructed by the CVAE, respectively; z is the latent variable; E is the expected value of z; μ is the mean value of z; σ is the variance of z; D is the divergence; N is the Gaussian distribution of z; ε is the noise of N.

This is due to how neural networks work. At the same time, according to Ma et al. (2020) and Römgens et al. (2024), samples must have the same length L such that:

L > 2 × (f_c / f_0)   (2)

where f_c is the sampling frequency of the data (i.e., 200 fps) and f_0 is the lowest frequency of interest. In the present study, each recording was segmented into time samples of 0.64 s (128 frames), which implies that for each recording time history (about 2 minutes) we have 77 total samples. However, at the beginning and at the end of each recording the elliptical filter generates artifacts; consequently, the initial and final samples were discarded. The remaining samples were randomly partitioned into training and test datasets in proportions of 70% and 30%, respectively. The data were then normalized using a standard scaler, which eliminates the dependence on the processed signal intensity by removing the signal mean value and scaling the intensity to unit variance. Further details on data normalization are reported in Palumbo et al. (2025). After pre-processing, the data are ready for input to the CVAE module.

Convolutional neural networks are the basis of CVAE generative models, which are able to capture the spatial and temporal relations between experimental data (De Angelis et al. (2024); Palumbo et al. (2025)). CVAEs combine traditional convolutional auto-encoders with probabilistic methods, learning a latent representation characterized by a given probability distribution, e.g. a Gaussian. Consequently, CVAEs are able to generate new data by sampling from the latent space. In practice, a CVAE essentially comprises an encoder and a decoder implemented by means of convolutional networks.
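The segmentation into 128-frame windows, the trimming of filter artifacts, the per-sample standardization, and the random 70/30 partition described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the synthetic recording, the function names, and the choice of per-sample (rather than per-feature) scaling are assumptions.

```python
import numpy as np

def segment(signal, window=128):
    """Split a 1-D recording into consecutive fixed-length samples
    (128 frames = 0.64 s at the assumed 200 fps)."""
    n = len(signal) // window
    return signal[: n * window].reshape(n, window)

def standard_scale(samples):
    """Remove each sample's mean and scale it to unit variance,
    removing dependence on signal intensity."""
    mu = samples.mean(axis=1, keepdims=True)
    sd = samples.std(axis=1, keepdims=True)
    return (samples - mu) / sd

rng = np.random.default_rng(0)
recording = rng.standard_normal(200 * 120)  # synthetic ~2-minute recording

samples = segment(recording)
samples = samples[1:-1]          # discard first/last samples (filter artifacts)
scaled = standard_scale(samples)

# random 70/30 train/test partition
idx = rng.permutation(len(scaled))
split = int(0.7 * len(scaled))
train, test = scaled[idx[:split]], scaled[idx[split:]]
```

An equivalent effect can be obtained with scikit-learn's StandardScaler when scaling is applied feature-wise over the training set instead of per sample.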
The encoder converts the input data into a probabilistic latent representation, with mean value μ and variance σ. The decoder is used to generate artificial data according to the latent representation of the input data. A CVAE is also characterized by a loss function comprising two parts: one minimizes the reconstruction error of the input data, while the other penalizes the deviation of the latent-space distribution from a reference distribution (regularization). This regularization yields better generalization, which makes the CVAE method particularly suitable for the generation of images, signals, and other sequential data (Ning and Xie (2024); Higgins et al. (2017); Ahmed and Longo (2020)). A Gaussian-distributed noise, indicated by ε in the following, is introduced into the latent space to permit the reparameterization trick, which makes the latent sampling differentiable and compatible with backpropagation (Pollastro et al. (2023); Kingma and Welling (2014)).
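The two-part loss and the reparameterization trick z = μ + σ·ε can be sketched in plain NumPy as a didactic stand-in for a deep-learning framework. Symbol names follow Fig. 2; the log-variance parameterization and the mean-squared reconstruction term are common conventions assumed here, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def reparameterize(mu, log_var):
    """z = mu + sigma * eps, with eps ~ N(0, I). Because the noise eps is
    drawn outside the network, z stays differentiable w.r.t. mu and log_var,
    so gradients can flow through the sampling step."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def cvae_loss(x, x_hat, mu, log_var):
    """Reconstruction term plus KL regularization toward N(0, I)."""
    recon = np.mean((x - x_hat) ** 2)                           # reconstruction error
    kl = -0.5 * np.mean(1 + log_var - mu**2 - np.exp(log_var))  # divergence D
    return recon + kl

# one latent sample per input, for a batch of 2 inputs with 4 latent dimensions
mu = np.zeros((2, 4))
log_var = np.zeros((2, 4))
z = reparameterize(mu, log_var)
```

Note that the KL term vanishes exactly when μ = 0 and log σ² = 0, i.e. when the latent distribution already matches the N(0, I) reference.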