
Here, x = tr, te indicates the training and the testing data set, respectively, and m represents the NN output dimension. T^tr denotes the training data set and T^te the testing data set, while N_x denotes the respective data set size. Furthermore, k_e denotes purely elastic steps, while k_p denotes only plastic steps. The data-driven loss value is enhanced and regularized Raissi et al. (2019) by physics-informed qualitative knowledge in the form of additional loss contributions Hildebrand and Klinge (2024b). These allow guiding the remaining prediction uncertainties of the NN based on the requirements of the deviatoric character of ε^p and χ,

$$ l_{\mathrm{epsp\_trace}} = \mathrm{MSE}_x\!\left(\operatorname{tr}\boldsymbol{\varepsilon}^{p}_{k},\, 0\right), \qquad l_{\mathrm{chi\_trace}} = \mathrm{MSE}_x\!\left(\operatorname{tr}\boldsymbol{\chi}_{k},\, 0\right), \tag{8} $$

compliance with the von Mises yield criterion, since positive values of the yield function Φ are not admissible,

$$ l_{\mathrm{flow\_elastic}} = \mathrm{MSE}_x\!\left(\max\!\left(0,\, \Phi\!\left(\boldsymbol{\sigma}_{k_e},\, \boldsymbol{\chi}_{k_e},\, \sigma_Y\right)\right),\, 0\right), \tag{9} $$

$$ l_{\mathrm{flow\_plastic}} = \mathrm{MSE}_x\!\left(\Phi\!\left(\boldsymbol{\sigma}_{k_p},\, \boldsymbol{\chi}^{\mathrm{ML}}_{k_p},\, \sigma_Y\right),\, 0\right), \tag{10} $$

and the Karush-Kuhn-Tucker conditions on the plastic multiplier and the flow rule,

$$ l_{\mathrm{epsp\_elastic}} = \mathrm{MSE}_x\!\left(\Delta\boldsymbol{\varepsilon}^{p}_{k_e},\, \mathbf{0}\right), \qquad l_{\mathrm{chi\_elastic}} = \mathrm{MSE}_x\!\left(\Delta\boldsymbol{\chi}_{k_e},\, \mathbf{0}\right). \tag{11} $$

An additional contribution l_assoc_plastic enforcing the associativity of the flow rule, however, has been determined not to influence the result quality and stability significantly. All loss contributions are combined as a sum to calculate the overall loss value

$$ l = l_{\mathrm{data\_sig}} + l_{\mathrm{epsp\_trace}} + l_{\mathrm{chi\_trace}} + l_{\mathrm{flow\_elastic}} + l_{\mathrm{flow\_plastic}} + l_{\mathrm{epsp\_elastic}} + l_{\mathrm{chi\_elastic}} + l_{\mathrm{assoc\_plastic}}. $$
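To make the composition of the loss concrete, the following sketch implements the physics-informed contributions of Eqs. (8)-(11) and their summation in PyTorch. It is a minimal illustration, not the authors' implementation: the batched 3×3 tensor representation, the function and argument names, and the concrete form of the von Mises yield function Φ are assumptions; the associativity term l_assoc_plastic is omitted here for brevity.

```python
import torch

def yield_function(sig, chi, sigma_y):
    """Von Mises yield function Phi for batches of 3x3 stress tensors;
    positive values indicate inadmissible states outside the yield surface."""
    xi = sig - chi                                           # shifted stress
    tr = torch.einsum('bii->b', xi) / 3.0
    eye = torch.eye(3, dtype=xi.dtype, device=xi.device)
    dev = xi - tr[:, None, None] * eye                       # deviatoric part
    s_eq = torch.sqrt(1.5 * torch.einsum('bij,bij->b', dev, dev))
    return s_eq - sigma_y

def total_loss(l_data_sig, eps_p, chi,                 # all steps k
               sig_e, chi_e, d_eps_p_e, d_chi_e,       # elastic steps k_e
               sig_p, chi_p, sigma_y):                 # plastic steps k_p
    mse = torch.nn.functional.mse_loss
    # Eq. (8): deviatoric character of eps_p and chi
    tr_eps_p = torch.einsum('bii->b', eps_p)
    tr_chi = torch.einsum('bii->b', chi)
    l_epsp_trace = mse(tr_eps_p, torch.zeros_like(tr_eps_p))
    l_chi_trace = mse(tr_chi, torch.zeros_like(tr_chi))
    # Eq. (9): no positive yield function values on elastic steps
    phi_e = yield_function(sig_e, chi_e, sigma_y)
    l_flow_elastic = mse(torch.clamp(phi_e, min=0.0), torch.zeros_like(phi_e))
    # Eq. (10): the yield condition Phi = 0 holds on plastic steps
    phi_p = yield_function(sig_p, chi_p, sigma_y)
    l_flow_plastic = mse(phi_p, torch.zeros_like(phi_p))
    # Eq. (11): KKT conditions - no plastic evolution on elastic steps
    l_epsp_elastic = mse(d_eps_p_e, torch.zeros_like(d_eps_p_e))
    l_chi_elastic = mse(d_chi_e, torch.zeros_like(d_chi_e))
    # Overall loss as the plain, unweighted sum of all contributions
    return (l_data_sig + l_epsp_trace + l_chi_trace + l_flow_elastic
            + l_flow_plastic + l_epsp_elastic + l_chi_elastic)
```

Keeping each contribution as a separate MSE term, as above, makes it straightforward to monitor the individual physics residuals during training.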

2.3. Neural Network architecture

The network is set up as a plain and stateless Fully Connected Neural Network (FCNN). This allows for a significant reduction in the number of NN parameters and a vast reduction in the required number of training epochs compared to stateful approaches Logarzo et al. (2021). The input and output quantities are similar to those of the conventional Radial Return Mapping algorithm, which makes the network more explainable compared to architectures where such quantities are implicitly learned as hidden states. The NN is provided with the current strain state as well as the plastic strain and the sum of back stresses from the previous time step as inputs to characterize the previous plastic state of the material. The setup then trains the NN to output the increments of the plastic strains ∆ε^p and the back stresses ∆χ; a minimal sketch of such a network is given below.
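The following sketch shows one possible realization of such a stateless FCNN in PyTorch. The depth, width, activation function, and the 6-component Voigt representation of the symmetric tensors are illustrative assumptions, not prescribed by the paper.

```python
import torch
import torch.nn as nn

class IncrementNet(nn.Module):
    """Stateless FCNN mapping (eps_{k+1}, eps^p_k, chi_k) to the increments
    (d_eps_p, d_chi), all tensors given in 6-component Voigt notation."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 * 6, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * 6),   # 6 for d_eps_p, 6 for d_chi
        )

    def forward(self, eps, eps_p_prev, chi_prev):
        # Concatenate current strain and previous plastic state
        x = torch.cat([eps, eps_p_prev, chi_prev], dim=-1)
        out = self.net(x)
        return out[..., :6], out[..., 6:]
```

Because the previous plastic state enters explicitly through the inputs, no hidden state has to be carried between time steps, which is what keeps the architecture small and fast to train.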

Afterwards, the target quantities are determined as a postprocessing step by

$$ \boldsymbol{\varepsilon}^{p}_{k+1} = \boldsymbol{\varepsilon}^{p}_{k} + \Delta\boldsymbol{\varepsilon}^{p}_{k+1}, \qquad \boldsymbol{\varepsilon}^{p}_{0} = \mathbf{0}, \tag{12} $$

$$ \boldsymbol{\sigma}_{k+1} = \mathbb{C} : \left( \boldsymbol{\varepsilon}_{k+1} - \boldsymbol{\varepsilon}^{p}_{k+1} \right), \tag{13} $$

$$ \mathbb{C} = \lambda\, \mathbf{I} \otimes \mathbf{I} + 2G\, \mathbb{I}, \qquad G = \frac{E}{2\,(1+\nu)}, \tag{14} $$

$$ \boldsymbol{\chi}_{k+1} = \boldsymbol{\chi}_{k} + \Delta\boldsymbol{\chi}_{k+1}. \tag{15} $$

Here, λ and G denote the Lamé constants of the isotropic elastic stiffness ℂ.
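A compact sketch of this postprocessing step, assuming isotropic linear elasticity and batched 3×3 tensors (the function name and signature are hypothetical):

```python
import torch

def postprocess(eps, eps_p_prev, chi_prev, d_eps_p, d_chi, E, nu):
    """Recover the target quantities from the predicted increments,
    following Eqs. (12)-(15)."""
    eps_p = eps_p_prev + d_eps_p                          # Eq. (12)
    chi = chi_prev + d_chi                                # Eq. (15)
    # Lame constants of the isotropic elastic stiffness, cf. Eq. (14)
    G = E / (2.0 * (1.0 + nu))
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    # Eq. (13): sigma = C : (eps - eps_p) with C = lam I(x)I + 2G II
    eps_e = eps - eps_p
    tr_e = torch.einsum('bii->b', eps_e)[:, None, None]
    eye = torch.eye(3, dtype=eps.dtype, device=eps.device)
    sig = lam * tr_e * eye + 2.0 * G * eps_e
    return eps_p, chi, sig
```

Since the elastic law is applied analytically here, the NN only has to learn the plastic evolution, while the stress response remains exactly consistent with linear elasticity.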
