R. Ashwathi et al. / Procedia Structural Integrity 70 (2025) 424–431
The node representation update at layer (l+1) is

h_i^{(l+1)} = \sigma\Big( W_1 h_i^{(l)} + W_2 \sum_{j \in N(i)} \phi\big( h_i^{(l)}, h_j^{(l)}, e_{ij} \big) \Big)

where N(i) denotes the set of neighbor nodes of node i, \phi defines the function that passes the node-level and edge-level messages, W_1 and W_2 are the learnable parameters, and \sigma denotes the ReLU activation function. After passing the information through l layers, the final graph embedding is obtained using the following readout

h_G = \mathrm{READOUT}\big( \{ h_i^{(l)} \mid i \in G \} \big)

where READOUT denotes sum pooling, i.e. h_G = \sum_{i \in G} h_i^{(l)}. The final strength prediction \hat{y} is given by

\hat{y} = \mathrm{MLP}(h_G)

where MLP signifies the forward propagation of a multi-layer perceptron trained to minimize the difference between predicted values and ground-truth annotations. The graph convolutional layer (GCL) blends the information from neighbor nodes in order to track the local patterns, relationships, and dependencies between the SCMs and several concrete properties (Van Dao et al., 2020). An attention layer is added to assign higher weights to nodes with values closer to higher strengths, thus distinguishing the most relevant nodes from the less relevant ones (Khan and Javed, 2023). The pooling layer combines the individual node representations and generates a feature vector denoting the concrete properties across the different types of SCM, and the output layer maps this feature vector to the corresponding target properties. The layers used in the GNN architecture are summarized in Table 4.
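As a minimal sketch of the update rule and sum-pooling readout above (pure NumPy on a toy 5-node cycle graph; the message function \phi is taken here to be the identity on neighbor features, which is an assumption, and all sizes and weights are illustrative, not the paper's trained values):

```python
import numpy as np

rng = np.random.default_rng(0)

N, F, H = 5, 4, 8            # nodes, input feature dim, hidden dim (illustrative)
X = rng.normal(size=(N, F))  # node features h_i^{(l)}

# toy undirected cycle graph: neighbor sets N(i)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
neighbors = {i: [] for i in range(N)}
for u, v in edges:
    neighbors[u].append(v)
    neighbors[v].append(u)

W1 = rng.normal(size=(F, H))  # learnable self-weight
W2 = rng.normal(size=(F, H))  # learnable neighbor-weight

def relu(x):
    return np.maximum(x, 0.0)

# one round of message passing:
# h_i^{(l+1)} = ReLU( W1 h_i + W2 * sum_{j in N(i)} h_j )
H_next = np.stack([
    relu(X[i] @ W1 + sum(X[j] for j in neighbors[i]) @ W2)
    for i in range(N)
])

# sum-pooling readout: one embedding vector for the whole graph
h_graph = H_next.sum(axis=0)
print(h_graph.shape)  # (8,)
```

A subsequent MLP head would map `h_graph` to the scalar strength prediction \hat{y}.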
Table 4. Types of layers

Type of layer      Input     Output    Function
Input layer        (N, F)    (N, F)    -
GCNConv layer 1    (N, F)    (N, H)    Feature extraction (ReLU)
GCNConv layer 2    (N, H)    (N, H)    Feature extraction (ReLU)
Output layer       (N, H)    (N, O)    Strength prediction

where N = number of nodes, F = input feature dimension, H = hidden node dimension, O = output dimension.

3.3. GNN model training
The proposed model is implemented using the PyTorch Geometric library. The dataset is split into a training set (70%), a testing set (15%), and a validation set (15%). The steps involved in model training are summarized in Fig. 4.
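The 70/15/15 split can be sketched in plain Python (a hypothetical dataset of 100 samples stands in for the paper's mix-design graphs; the paper does not state its shuffling procedure, so the seeded shuffle here is an assumption):

```python
import random

random.seed(42)

# hypothetical dataset: indices stand in for graph samples
dataset = list(range(100))
random.shuffle(dataset)  # shuffle before splitting to avoid ordering bias

n = len(dataset)
n_train = int(0.70 * n)  # 70% training
n_test = int(0.15 * n)   # 15% testing

train_set = dataset[:n_train]
test_set = dataset[n_train:n_train + n_test]
val_set = dataset[n_train + n_test:]  # remaining 15% for validation

print(len(train_set), len(test_set), len(val_set))  # 70 15 15
```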
Table 5. Comparison of results

GNN Model                   MSE     MAE     RMSE    R^2
GNN with 2 hidden layers    2.72    0.88    1.65    0.79
GNN with 3 hidden layers    3.53    0.91    1.88    0.83
GCNConv                     1.18    0.82    1.09    0.98
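The error metrics in Table 5 are internally consistent: each RMSE is the square root of the corresponding MSE (1.65 ≈ √2.72, 1.88 ≈ √3.53, 1.09 ≈ √1.18). A sketch of how these metrics are computed, using hypothetical predicted and true strength values (not the paper's data):

```python
import math

# hypothetical predicted vs. true compressive strengths (MPa), for illustration only
y_true = [32.0, 41.5, 28.3, 55.0, 47.2]
y_pred = [33.1, 40.2, 29.0, 53.6, 48.0]

n = len(y_true)
errors = [p - t for p, t in zip(y_pred, y_true)]

mse = sum(e * e for e in errors) / n        # mean squared error
mae = sum(abs(e) for e in errors) / n       # mean absolute error
rmse = math.sqrt(mse)                       # RMSE is always sqrt(MSE)

mean_t = sum(y_true) / n
ss_res = sum(e * e for e in errors)
ss_tot = sum((t - mean_t) ** 2 for t in y_true)
r2 = 1.0 - ss_res / ss_tot                  # coefficient of determination

print(round(mse, 3), round(mae, 3), round(rmse, 3), round(r2, 3))
```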