of the PSD by the non-stationary characteristics that manifest in fourth order (Eq. 4). Linear systems respond to this proportionally, and the NSM of the responses $M_{yy}(f_1, f_2)$ is simply calculated by multiplication with the squared transfer function, $M_{yy}(f_1, f_2) = |H_{xy}(f_1)|^2 \, M_{xx}(f_1, f_2) \, |H_{xy}(f_2)|^2$. Incorporating the NSM complements the statistical characterization of the responses by $\mu_{2,y}$ (via the PSD) with $\mu_{4,y}$ (via the NSM). This means that for linear systems the response kurtosis $\beta_y$ can be estimated statistically via Eq. (4) on the basis of the frequency-domain characterization alone, without the extensive processing of time-domain realizations.

To briefly provide more detailed background on the NSM: it can be obtained either by applying correlation theory to STP (spectrogram) analysis (Trapp and Wolfsteiner (2021b)) or by framing it as a subset of the fourth-order trispectrum $S_{xxxx}(f_1, f_2, f_3)$ that is related to low-frequency modulation (Trapp and Wolfsteiner (2023b); Nikias and Petropulu (1993)). More specifically, it should be interpreted as a simplification of the trispectrum, which provides the full spectral decomposition of $\mu_4$, and thereby of the kurtosis $\beta$, for general random processes. For low-frequency varying processes that are fundamentally Gaussian, the NSM captures the same fourth-order moment as the trispectrum. However, very abrupt changes, distinct deterministic frequency components and nonlinear influences that may be included in the trispectrum may not appear in the NSM. The NSM $M_{xx}(f_1, f_2)$ has two advantages over the trispectrum $S_{xxxx}(f_1, f_2, f_3)$: its characterization requires one frequency argument less, and its arguments allow a clear interpretation. The NSM spectrally decomposes all contributions to $\mu_4$ that are caused by a low-frequency non-stationary evolution over the frequency arguments $f_1$ and $f_2$. The diagonal values $f_1 = f_2$ indicate non-stationarity for individual frequency intervals, while the off-diagonal values $f_1 \neq f_2$ indicate whether these effects are correlated in time for $f_1$ and $f_2$. This means the NSM shows whether potential modes are excited synchronously or not, an effect that will be discussed in the upcoming section. The NSM is implemented in an open-source Python package, pyRaTS (Trapp and Wolfsteiner (2023a)).
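The response relation above translates directly into array operations. The following minimal NumPy sketch only illustrates the scaling $M_{yy}(f_1, f_2) = |H_{xy}(f_1)|^2 M_{xx}(f_1, f_2) |H_{xy}(f_2)|^2$; the single-degree-of-freedom transfer function and the input NSM used here are hypothetical placeholders, not data or routines from the paper or from pyRaTS.

    # Sketch: propagate a non-stationarity matrix (NSM) through a linear system,
    # M_yy(f1, f2) = |H(f1)|^2 * M_xx(f1, f2) * |H(f2)|^2
    import numpy as np

    f = np.linspace(0.0, 200.0, 401)   # frequency grid in Hz (illustrative)

    # Hypothetical SDOF transfer function (natural frequency 50 Hz, 3 % damping)
    fn, zeta = 50.0, 0.03
    H = 1.0 / (1.0 - (f / fn) ** 2 + 2j * zeta * (f / fn))

    # Placeholder input NSM M_xx(f1, f2); in an application this would be
    # estimated from the excitation, e.g. via short-time (spectrogram) analysis
    Mxx = np.outer(np.exp(-f / 50.0), np.exp(-f / 50.0))

    # Response NSM: scale rows by |H(f1)|^2 and columns by |H(f2)|^2
    H2 = np.abs(H) ** 2
    Myy = H2[:, None] * Mxx * H2[None, :]

    # Diagonal M_yy(f, f): non-stationarity per frequency interval;
    # off-diagonal entries: whether modulation at f1 and f2 is correlated in time
    print(Myy.shape, Myy.diagonal()[:5])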
2.1. Artificial neural networks (ANN)

An artificial neural network (ANN) is a mathematical representation that simulates the structure and function of biological neurons in the human nervous system (Bishop (1995)). The learning capabilities of such models are widely used to predict the functional dependencies between input and output variables. A neural network consists of one input and one output layer, whose numbers of neurons align with the numbers of input and output parameters. In between those layers are hidden layers. The number of hidden layers and of neurons within them is application dependent, and methods for obtaining the optimal configuration remain an area of ongoing research. Hidden layers and their associated neurons, once trained, capture specific dependencies within the data. However, adding an excessive number of neurons to a hidden layer or creating too many hidden layers can result in the model learning data features that are not truly part of the underlying relationship. Instead, the model learns noise and random fluctuations, ultimately diminishing its performance on unseen data, a phenomenon known as overfitting.

Methods aimed at preventing overfitting during training typically consist of increasing the size of the training data or decreasing the number of the network's trainable parameters, although the latter option is rarely adopted, since larger networks generally have greater potential to be more powerful. A commonly used alternative is regularization, which in neural networks involves either incorporating additional terms in the loss function that encourage the network to learn smaller weight values or adjusting the network itself via dropout methods. In an ANN, each neuron follows a simple relation between its inputs and its output, $y = \varphi\left[\sum_{i=1}^{n} w_i x_i + b\right]$, where each connection between neurons is equipped with a weight $w_i$ representing the strength of the connection. For each input $x_i$, the neuron sums the products of $x_i$ and $w_i$ and adds a bias $b$ to the sum. The weighted real-valued inputs are then processed through a nonlinear activation function $\varphi[\,\cdot\,]$. This is essential for the network to be capable of learning nonlinear relationships; without the nonlinearity introduced by the activation function, a neural network would simply generate outputs as a linear function of its inputs, despite having multiple layers in place. The learning process uses a technique called backpropagation, which adjusts the weights of the hidden layers such that the output error of the loss function is reduced. Starting with random weights, the input information is fed forward through the network and the network's weights are iteratively adjusted in the direction opposite to the gradient of the loss function with respect to those weights, reducing the error with each iteration (Awad and Khanna (2015)).
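To make the neuron relation $y = \varphi\left[\sum_{i=1}^{n} w_i x_i + b\right]$ and the gradient-based weight adjustment concrete, the following minimal NumPy sketch trains a single neuron on one example. The tanh activation, the squared-error loss and the learning rate are illustrative assumptions and not choices made in the paper.

    # Sketch: single neuron y = phi(sum_i w_i x_i + b) with gradient-descent updates
    import numpy as np

    rng = np.random.default_rng(0)

    def phi(z):
        # tanh as an example nonlinear activation function (assumption)
        return np.tanh(z)

    def phi_prime(z):
        return 1.0 - np.tanh(z) ** 2

    x = rng.normal(size=3)     # example inputs
    w = rng.normal(size=3)     # connection weights, randomly initialized
    b = 0.0                    # bias
    y_target = 0.5             # example target output
    lr = 0.1                   # learning rate (assumption)

    for step in range(100):
        z = w @ x + b          # weighted sum of inputs plus bias
        y = phi(z)             # neuron output
        # Chain rule for the squared-error loss L = 0.5 * (y - y_target)^2:
        # dL/dw_i = (y - y_target) * phi'(z) * x_i
        grad_w = (y - y_target) * phi_prime(z) * x
        grad_b = (y - y_target) * phi_prime(z)
        # Step opposite to the gradient to reduce the output error
        w -= lr * grad_w
        b -= lr * grad_b

    print(f"final output {phi(w @ x + b):.3f}, target {y_target}")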
