
Stefan Hildebrand et al. / Procedia Structural Integrity 72 (2025) 520–528


2. Methods

2.1. Conventional mechanical modeling of cyclic hardening

The present paper considers the classical framework of plasticity, starting with the additive decomposition of strain ε = ε^e + ε^p into an elastic part (ε^e) and a plastic part (ε^p), where the latter has a deviatoric character, tr{ε^p} = 0, which goes back to the theory of dislocation motion (Oliveira and Penna, 2004). Stresses and elastic strains are connected by Hooke's law of elasticity,

\[
\boldsymbol{\sigma} = \mathbb{C} : \boldsymbol{\varepsilon}^{e} = \mathbb{C} : \left( \boldsymbol{\varepsilon} - \boldsymbol{\varepsilon}^{p} \right). \tag{1}
\]

The modeling of kinematic hardening is crucial for capturing cyclic behavior and for implementing the Bauschinger and ratcheting effects, which manifest through the change of the yield limit over the number of loading cycles (Aygün et al., 2021; Dettmer and Reese, 2004). The yield condition in this case takes the form

\[
\Phi = \left\| \operatorname{dev}\!\left( \boldsymbol{\sigma} - \sum_{i=1}^{M} \boldsymbol{\chi}_{i} \right) \right\| - \sqrt{\tfrac{2}{3}}\, \sigma_{Y}(\varepsilon_{p}) \le 0, \qquad i = 1, \dots, M, \tag{2}
\]

where the yield condition Φ ≤ 0 determines physically admissible states and the χ_i represent back stresses, which are purely deviatoric quantities (tr{χ_i} = 0). Isotropic hardening is incorporated into the yield limit σ_Y as a function of the equivalent plastic strain ε_p (Cazacu et al., 2019),

\[
\sigma_{Y}(\varepsilon_{p}) = \sigma_{Y,0} + R(\varepsilon_{p}), \qquad \mathrm{d}\varepsilon_{p} = \sqrt{\tfrac{2}{3}\, \mathrm{d}\boldsymbol{\varepsilon}^{p} : \mathrm{d}\boldsymbol{\varepsilon}^{p}}, \tag{3}
\]

\[
\varepsilon_{p} = \int \mathrm{d}\varepsilon_{p}. \tag{4}
\]

A number of alternative approaches are available for the extension R(ε_p) to incorporate isotropic hardening (Suchocki, 2022). The evolution of the plastic strains is assumed to be associated to the plastic flow potential (Altenbach and Öchsner, 2018):

\[
\mathrm{d}\boldsymbol{\varepsilon}^{p} = \mathrm{d}\lambda\, \frac{\partial \Phi}{\partial \boldsymbol{\sigma}}, \qquad \mathrm{d}\boldsymbol{\chi}_{i} = \frac{2}{3}\, C_{i}\, \mathrm{d}\boldsymbol{\varepsilon}^{p} - \gamma_{i}\, \boldsymbol{\chi}_{i}\, \mathrm{d}\lambda \tag{5}
\]
where dλ is the plastic multiplier, calculated such that Φ = 0 is fulfilled for a plastic step. The evolution of the back stresses is given by a hardening model, an example being the Armstrong-Frederick equations (Frederick and Armstrong, 2007) with material parameters C_i and γ_i.
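As a concrete illustration of the model above, the following minimal sketch implements a single strain-driven update for a uniaxial (1D) reduction of eqs. (1)-(5) with one back stress (M = 1) and a linear isotropic hardening extension R(ε_p) = H ε_p. The material parameters and the linear choice of R are illustrative assumptions, not values taken from the paper.

```python
def return_mapping_1d(eps, state, E=200e3, sig_y0=250.0, H=1e3, C=20e3, gamma=100.0):
    """One strain-driven step of a 1D elastoplastic model with a single
    Armstrong-Frederick back stress and linear isotropic hardening
    R(eps_p) = H * eps_p. All parameter values are illustrative only."""
    eps_p, chi, p = state                            # plastic strain, back stress, equiv. plastic strain
    sig_tr = E * (eps - eps_p)                       # elastic trial stress, cf. eq. (1)
    phi_tr = abs(sig_tr - chi) - (sig_y0 + H * p)    # yield function, cf. eqs. (2)-(3)
    if phi_tr <= 0.0:                                # admissible state: purely elastic step
        return sig_tr, (eps_p, chi, p)
    n = 1.0 if sig_tr > chi else -1.0                # flow direction (1D analogue of dPhi/dsigma)
    # Plastic multiplier from the consistency condition Phi = 0, with the
    # recovery term gamma*chi*dlam treated explicitly in chi:
    dlam = phi_tr / (E + H + C - gamma * chi * n)
    eps_p += dlam * n                                # associated flow rule, cf. eq. (5)
    chi += C * dlam * n - gamma * chi * dlam         # Armstrong-Frederick update, cf. eq. (5)
    p += dlam                                        # accumulated equiv. plastic strain, cf. eq. (4)
    return E * (eps - eps_p), (eps_p, chi, p)
```

Driving this update through a cyclic strain history reproduces the Bauschinger effect qualitatively: after plastic loading, the back stress χ shifts the elastic range, so reverse yielding starts at a lower stress magnitude.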

2.2. Physics-informed neural network training

In the current contribution, fully connected neural networks (FCNN) are used as universal approximators (Hornik et al., 1989). For training, gradient-descent-based optimization algorithms such as Adam (Kingma and Ba, 2014) are used, based on the mean square error (MSE) loss, which typically yields robust and accurate results:

\[
\mathrm{MSE} = \frac{1}{N_{x}} \sum_{k=1}^{N_{x}} \frac{1}{m_{x}} \sum_{i=1}^{m_{x}} \left( R_{i}\!\left(P_{x}^{k}\right) - \hat{R}_{i}\!\left(P_{x}^{k}\right) \right)^{2} \tag{6}
\]

\[
\mathcal{L} = \mathrm{MSE}\!\left( \boldsymbol{\sigma}_{\mathrm{data}}^{k}, \boldsymbol{\sigma}^{k}, \boldsymbol{\varepsilon}_{p}^{k}, \lambda^{k} \right) \tag{7}
\]
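A minimal sketch of these two training ingredients, assuming eq. (6) averages squared differences between network outputs and reference values over all N_x sample points and m_x output components: an MSE loss in the spirit of eq. (6), and one hand-rolled Adam update following Kingma and Ba (2014). The array shapes and hyperparameter defaults are illustrative, not taken from the paper.

```python
import numpy as np

def mse_loss(R_pred, R_ref):
    """MSE in the spirit of eq. (6): average of squared component-wise
    differences over N_x sample points (rows) and m_x components (columns)."""
    R_pred, R_ref = np.asarray(R_pred, float), np.asarray(R_ref, float)
    return float(np.mean((R_pred - R_ref) ** 2))

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update (Kingma and Ba, 2014) for parameters theta at step t >= 1."""
    m = b1 * m + (1 - b1) * grad           # biased first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2      # biased second-moment estimate
    m_hat = m / (1 - b1 ** t)              # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)              # bias-corrected second moment
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```

In an actual PINN training loop, `grad` would be the gradient of the total loss ℒ with respect to the network parameters, obtained by automatic differentiation, and the moment estimates `m`, `v` would be carried from one iteration to the next.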
