
Marco Piacentini et al. / Procedia Structural Integrity 79 (2026) 394–403


σ = σₙ(1 + e). The logarithmic strain was computed as ε = ln(1 + e). For simplicity, the absolute values of σ and ε were used in subsequent analyses. The effective Young's modulus E was evaluated as the initial tangent modulus E = σ/ε, while the effective strength σ_y was determined using the 0.2% offset method.
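The conversions above can be sketched as follows; this is a minimal illustration with made-up engineering data, and the function name is ours, not the authors':

```python
import numpy as np

def true_stress_log_strain(sigma_n, e):
    """Convert engineering stress/strain to true stress and logarithmic strain:
    sigma = sigma_n * (1 + e), eps = ln(1 + e); absolute values, as in the text."""
    sigma = sigma_n * (1.0 + e)
    eps = np.log1p(e)  # numerically stable ln(1 + e)
    return np.abs(sigma), np.abs(eps)

# Hypothetical engineering data (illustrative only, not from the paper)
e = np.array([0.002, 0.005, 0.010])
sigma_n = np.array([4.4, 10.9, 21.3])

sigma, eps = true_stress_log_strain(sigma_n, e)
# Effective Young's modulus as the initial tangent modulus, E = sigma / eps
E = sigma[0] / eps[0]
```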

Table 1. Material properties, CLEAR resin, Formlabs.

Young's modulus (MPa): 2200
Poisson's ratio: 0.3
Yield stress (MPa), tabular: 28, 66
Yield strain, tabular: 0.0, 0.26

2.3. Deep Learning

Multi-Layer Perceptrons (MLPs) were employed to predict the effective mechanical properties from global structural parameters. This choice reflects their widespread use for correlating geometrical descriptors with mechanical response and their successful integration into inverse design frameworks (Zheng et al. (2023b); Liu et al. (2021)). Architectures and hyperparameters were initially inspired by MLP-based stiffness predictors in the literature, such as those in Padhy et al. (2024) and Kumar et al. (2020), and iteratively adapted to the present framework. Three MLPs were trained and evaluated (Fig. 4). Each layer consists of a matrix multiplication and bias addition, followed by a Rectified Linear Unit (ReLU) nonlinear activation, which enables the network to approximate arbitrary functions when sufficiently many hidden neurons are present. All MLPs produced two outputs: effective stiffness and effective strength.

The MLPs differed mainly in the type of input they received: (1) MLP-1 received the three global generation parameters; (2) MLP-2 received the flattened list of node coordinates, zero-padded to 5% above the maximum dataset size; (3) MLP-3 received a concatenation of the global parameters and node coordinates. Both inputs (features) and outputs (targets) were standardized to zero mean and unit standard deviation to improve numerical stability, with the original statistics stored for consistent normalization and rescaling. Nodal coordinates were normalized collectively. Nodes were ordered by vertical position and, in case of ties, by horizontal position. This ordering improves consistency for similar structures, but the representation remains fragile: predictions depend on node ordering and count, and MLPs lack permutation invariance, making them poorly suited to such inputs. Indeed, the coordinate-based MLPs are included in this work precisely to illustrate these limitations.
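As a concrete illustration of the layer structure described above, the following NumPy sketch builds a dense ReLU network with two outputs; the layer widths and initialization scheme are our assumptions for illustration, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class MLP:
    """Minimal dense network: affine layers with ReLU, linear output layer.
    Layer sizes here are illustrative, not taken from the paper."""
    def __init__(self, sizes):
        # He-style initialization (an assumption on our part)
        self.W = [rng.standard_normal((m, n)) * np.sqrt(2.0 / m)
                  for m, n in zip(sizes[:-1], sizes[1:])]
        self.b = [np.zeros(n) for n in sizes[1:]]

    def forward(self, x):
        # Hidden layers: matrix multiplication + bias, then ReLU
        for W, b in zip(self.W[:-1], self.b[:-1]):
            x = relu(x @ W + b)
        # Linear output: (effective stiffness, effective strength)
        return x @ self.W[-1] + self.b[-1]

# MLP-1-style input: three global generation parameters (values made up)
net = MLP([3, 64, 64, 2])
y = net.forward(np.array([0.1, 0.5, 0.9]))
```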
Model parameters were optimized by minimizing the Mean Squared Error (MSE) using the Adaptive Moment Estimation (ADAM) optimizer. The dataset was randomly split into training (80%), validation (10%), and testing (10%) subsets. Training was conducted for 100 epochs with an initial learning rate of 4·10⁻⁴, reduced by a factor of 0.87 every 10 epochs. With a batch size of 64, validation was performed every 1250 iterations, and the model
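The stated schedule (initial rate 4·10⁻⁴, multiplied by 0.87 every 10 epochs) corresponds to a simple step decay, which can be sketched as follows; the helper name is ours:

```python
def lr_at_epoch(epoch, lr0=4e-4, gamma=0.87, step=10):
    """Step-decay learning rate: lr0 * gamma ** (epoch // step)."""
    return lr0 * gamma ** (epoch // step)

# Learning rate over the 100 training epochs
schedule = [lr_at_epoch(e) for e in range(100)]
```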

Fig. 4. Schematics of the three developed Multi-Layer Perceptron architectures.
