
[Figure: columns $r = 1, \ldots, R$ showing, per sample, the time segment $T_r$, the number of modes $N_{\mathrm{modes},r}$, the damping ratios $\zeta_{1,r}, \zeta_{2,r}, \ldots, \zeta_{N_{\mathrm{modes},r},r}$, the transfer function $H_{xy,r}(f)$, the input PSD $G_{xx,r}(f)$ and the resulting $\sigma_r$; the number of PSDs is $R$.]

Fig. 7. Schematic representation of the data generation using random numbers; each sample is fully defined by the time segments $T_1, T_2, \ldots, T_R$, the transfer functions $H_{xy,1}(f), H_{xy,2}(f), \ldots, H_{xy,R}(f)$ and the PSDs $G_{xx,1}(f), G_{xx,2}(f), \ldots, G_{xx,R}(f)$.
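A minimal sketch of such a random-sample generator is given below. Only the parameterization (segment duration, number of modes, damping ratios, transfer function, input PSD) comes from the figure; the parameter ranges, the modal-superposition form of $H_{xy}$ and the piecewise-constant PSD shape are assumptions made here for illustration, not the authors' exact choices.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
f = np.linspace(0.5, 500.0, 1000)          # frequency axis in Hz (assumed range)

def random_sample(f):
    """Draw one sample (T_r, H_xy,r(f), G_xx,r(f)); ranges are illustrative."""
    T = rng.uniform(1.0, 60.0)                       # time segment T_r in s
    n_modes = rng.integers(1, 6)                     # N_modes,r
    f_n = rng.uniform(10.0, 400.0, size=n_modes)     # natural frequencies
    zeta = rng.uniform(0.01, 0.10, size=n_modes)     # damping ratios zeta_i,r

    # transfer function assumed as a superposition of SDOF contributions
    r = f[None, :] / f_n[:, None]
    H = np.sum(1.0 / (1.0 - r**2 + 2j * zeta[:, None] * r), axis=0)

    # random piecewise-constant input PSD G_xx,r(f) (assumed shape)
    edges = np.sort(rng.uniform(f[0], f[-1], size=4))
    levels = rng.uniform(0.1, 10.0, size=3)
    G_xx = np.zeros_like(f)
    for lo, hi, lvl in zip(edges[:-1], edges[1:], levels):
        G_xx[(f >= lo) & (f < hi)] = lvl
    return T, H, G_xx

samples = [random_sample(f) for _ in range(8)]       # R = 8 illustrative samples
```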

[Figure: input PSDs $G_{xx,1}(f), \ldots, G_{xx,n}(f)$ and non-stationarity matrices $M_{xx,1}(f_1,f_2), \ldots, M_{xx,n}(f_1,f_2)$ are mapped through the transfer functions $H_{xy,1}(f), \ldots, H_{xy,n}(f)$ to the response PSDs $G_{yy,1}(f), \ldots, G_{yy,n}(f)$ and response matrices $M_{yy,1}(f_1,f_2), \ldots, M_{yy,n}(f_1,f_2)$; the characteristics $\Lambda_0, \Lambda_{I_1}, \Lambda_{I_2}, \Lambda_{\Theta_1}, \Lambda_{\Theta_2}$ are extracted from $M_{yy}^{(\mathrm{Train})}(f_1,f_2)$ and $M_{yy}^{(\mathrm{Test})}(f_1,f_2)$; the network is trained on Dirlik-based targets such as $(q_{\mathrm{sDK}})_{\mathrm{eq}}$, $(\mathrm{ratio})_{\mathrm{eq}}$, $s^{(\mathrm{DK})}$ and $\hat{s}_{\mathrm{eq},s}$.]

Fig. 8. Workflow for generating input and output data for training and testing.
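The figure indicates that the response quantities are obtained by passing the input PSDs and non-stationarity matrices through the transfer functions. A minimal sketch of these two transformations follows; the PSD relation is the standard random-vibration result, whereas the form of the matrix transformation is an assumption made here by analogy (the authors' earlier work defines the exact transform):

```python
import numpy as np

def response_psd(H, G_xx):
    """Standard random-vibration relation: G_yy(f) = |H_xy(f)|^2 * G_xx(f)."""
    return np.abs(H) ** 2 * G_xx

def response_nonstat_matrix(H, M_xx):
    """Response non-stationarity matrix, ASSUMED here to transform
    analogously to the PSD: M_yy(f1, f2) = H(f1) * conj(H(f2)) * M_xx(f1, f2).
    This is an illustrative sketch, not the paper's stated formula.
    """
    return H[:, None] * np.conj(H)[None, :] * M_xx
```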

When training the network, the data was split into three subsets: a training, a validation and a test set. The training set serves to adjust the network weights and biases and to calculate the gradients; the validation set is used to detect whether the network is overfitting, i.e. merely memorizing the relationship between the input and output data during training; and the test set provides an independent evaluation of the network's performance. The input data is scaled to fall within a range of 0 and 1 using the min-max scaling formula $X_{\mathrm{scaled}} = \frac{X - X_{\min}}{X_{\max} - X_{\min}}$. This adjustment keeps the learned weights small during training, which improves the optimization process and leads to better overall performance of the ANN model. After experimenting with various data proportions, the portions of data finally used for training, validation and testing were 50,000, 10,000 and 5,000 samples (77 %, 15 % and 8 %), respectively. The ANN was modeled with the TensorFlow toolbox for Python (TensorFlow Developers (2023)). The training was carried out using the widely used Adam backpropagation algorithm, a variant of the stochastic gradient descent method. It is known for its computational efficiency and minimal memory requirements. Furthermore, it is
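A minimal sketch of this training pipeline is shown below. Only the split sizes, the min-max scaling, the TensorFlow toolbox and the Adam optimizer come from the text; the feature dimension, layer sizes and training hyperparameters are placeholders, not the paper's architecture.

```python
import numpy as np
import tensorflow as tf

# placeholder data with assumed feature dimension 32; 65,000 samples in total
X = np.random.rand(65000, 32).astype("float32")
y = np.random.rand(65000, 1).astype("float32")

# min-max scaling to [0, 1]: X_scaled = (X - X_min) / (X_max - X_min)
X_min, X_max = X.min(axis=0), X.max(axis=0)
X_scaled = (X - X_min) / (X_max - X_min)

# 50,000 / 10,000 / 5,000 split (77 % / 15 % / 8 %)
X_train, y_train = X_scaled[:50000], y[:50000]
X_val, y_val = X_scaled[50000:60000], y[50000:60000]
X_test, y_test = X_scaled[60000:], y[60000:]

# small feed-forward ANN; layer sizes are illustrative assumptions
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse")

# Adam backpropagation with the validation set monitored for overfitting
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          epochs=50, batch_size=256)
print("independent test loss:", model.evaluate(X_test, y_test))
```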
