Rachit Sharma et al. / Procedia Structural Integrity 70 (2025) 386–393
Table 2. Distribution of Input Parameters.

Parameter   b (mm)   h (mm)   a/d    L (mm)    f_c (MPa)   ρ_f (%)   E_f (GPa)   V_exp (kN)
Mean        322.76   267.45   3.56   2262.08   44.57       1.13      70.16       93.20
St. Dev.    290.95   161.56   1.70   1019.73   15.87       0.71      41.17       118.97
Min         89.0     73.0     0.50   900.0     19.2        0.11      24.80       6.45
Max         1854     1111     9.60   7315      102.0       4.12      148         1140.60
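The entries in Table 2 are standard descriptive statistics over the experimental database. A minimal sketch of how such a summary could be computed with NumPy — the three rows below are hypothetical placeholder beams, not the actual database:

```python
import numpy as np

# Hypothetical excerpt of a shear-test database; columns follow Table 2:
# b (mm), h (mm), a/d, L (mm), f_c (MPa), rho_f (%), E_f (GPa), V_exp (kN).
data = np.array([
    [150.0, 300.0, 3.0, 1800.0, 40.0, 1.0,  45.0,  60.0],
    [200.0, 400.0, 2.5, 2400.0, 50.0, 1.5, 120.0, 150.0],
    [250.0, 250.0, 4.0, 2000.0, 35.0, 0.5,  40.0,  45.0],
])

# Column-wise descriptive statistics, as reported in Table 2
stats = {
    "mean":    data.mean(axis=0),
    "st_dev":  data.std(axis=0, ddof=1),  # sample standard deviation
    "min":     data.min(axis=0),
    "max":     data.max(axis=0),
}
```

With the real database loaded in place of `data`, the same four lines reproduce the table row by row.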
4. Overview of ML Techniques

4.1. Support Vector Regressor (SVR)
SVR is a supervised machine learning algorithm designed to map input data into a higher-dimensional space, enabling linear separation through the application of a kernel function. For a given training dataset {(x_i, y_i)}_{i=1}^{N} ⊂ R^Q × R with N observations, where x_i = (x_i^1, …, x_i^Q) ∈ X ⊆ R^Q are the Q input attributes and y_i ∈ Y ⊆ R is the output variable, the algorithm estimates a decision function f(x) that permits a peak deviation of ε from each response value in the training set (Equation 1), while maintaining minimal complexity to prevent overfitting.
f(x) = Σ_{i ∈ SV} (α_i − α_i′) K(x_i, x) + b,   subject to α_i, α_i′ ∈ [0, C]        (1)
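Once the support vectors and their multipliers are known, Equation (1) is a kernel-weighted sum plus a bias. A pure-Python sketch of evaluating it, assuming an RBF kernel; all support vectors, multipliers, and the bias below are hypothetical values that would normally come from training:

```python
import math

def rbf_kernel(x_i, x, gamma=0.5):
    """RBF kernel K(x_i, x) = exp(-gamma * ||x_i - x||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x_i, x))
    return math.exp(-gamma * sq_dist)

def svr_predict(support_vectors, alphas, alphas_star, b, x):
    """Equation (1): f(x) = sum over SV of (alpha_i - alpha_i') K(x_i, x) + b."""
    return sum(
        (a - a_star) * rbf_kernel(x_i, x)
        for x_i, a, a_star in zip(support_vectors, alphas, alphas_star)
    ) + b

# Hypothetical trained quantities (illustrative only)
svs = [[0.0, 0.0], [1.0, 1.0]]
alphas, alphas_star, bias = [0.8, 0.3], [0.1, 0.0], 0.5
y_hat = svr_predict(svs, alphas, alphas_star, bias, [0.0, 0.0])
```

In practice a library such as scikit-learn performs both the training and this evaluation; the sketch only makes the structure of Equation (1) explicit.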
where K(x_i, x) is the kernel function (linear, polynomial, sigmoid, or radial basis function (RBF)), C serves as the regularization parameter, b denotes the bias, and α_i and α_i′ denote the Lagrange multipliers corresponding to the lower and upper support vectors, respectively.

4.2. Extreme Gradient Boosting (XGBoost)

The XGBoost algorithm sequentially combines weak learners to form a strong predictive model. Unlike other boosting techniques, the complexity of the objective function, and hence overfitting, is reduced through a regularization term. The objective is minimized at each iteration t using a loss term ℓ and a regularization term Ω(·), given by Equations (2) and (3), respectively:

L^(t) = Σ_{i=1}^{N} ℓ(y_i, ŷ_i^(t−1) + f_t(x_i)) + Ω(f_t)        (2)
Ω(f) = γT + (1/2) λ Σ_{j=1}^{T} w_j²        (3)

where γ is the leaf-complexity penalty, T is the number of leaves, w_j is the weight corresponding to leaf j, and λ is the penalty parameter.
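The regularization term of Equation (3) is simple enough to state directly in code; a minimal sketch, with γ, λ, and the leaf weights as hypothetical inputs:

```python
def xgb_regularization(gamma, lam, leaf_weights):
    """Equation (3): Omega(f) = gamma * T + 0.5 * lambda * sum_j w_j^2,
    where T is the number of leaves and w_j the weight of leaf j."""
    T = len(leaf_weights)
    return gamma * T + 0.5 * lam * sum(w * w for w in leaf_weights)
```

Larger γ penalizes trees with many leaves, while larger λ shrinks the leaf weights themselves; together they lower the complexity of each added tree, as described above.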
4.3. Hyperparameter Tuning and Cross-Validation

Hyperparameters are configuration settings that define the machine learning process (Hutter et al. (2019)). The optimal hyperparameters are determined using the GridSearch tuning technique, with 10-fold cross-validation employed to mitigate overfitting. The dataset is split into 80% training and 20% testing sets. The training set is then partitioned into K = 10 equally sized, non-overlapping folds, ensuring each observation is used for both training and validation. The model is iteratively trained on (K − 1) folds, with the remaining fold serving as the validation set. This process is repeated K times, yielding multiple performance indices, from which the final model parameters are derived as the mean. The optimized hyperparameters for the SVR and XGBoost models are presented in Table 3.
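The 80/20 split, grid search, and 10-fold cross-validation described above can be sketched with scikit-learn. The data here are synthetic stand-ins for the shear-strength database, and the parameter grid is deliberately small and illustrative, not the grid used in the paper:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold, train_test_split
from sklearn.svm import SVR

# Synthetic stand-in for the database: 200 beams, 8 input attributes (as in Table 2)
rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 8))
y = X @ rng.uniform(size=8) + rng.normal(scale=0.05, size=200)

# 80% training / 20% testing split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Grid search over a small hyperparameter grid with 10-fold cross-validation
grid = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [1.0, 10.0], "epsilon": [0.01, 0.1]},
    cv=KFold(n_splits=10, shuffle=True, random_state=0),
)
grid.fit(X_tr, y_tr)
best_params = grid.best_params_
```

The same pattern applies to the XGBoost model by swapping `SVR` for its regressor class and adjusting the parameter grid; the held-out 20% (`X_te`, `y_te`) is reserved for the final performance evaluation.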