
Hugo Mesquita Vasconcelos et al. / Procedia Structural Integrity 77 (2026) 601–610


the rescue and pleasure craft classes remained poorly represented in predictions, as seen in Table 1. When compared with the two preceding configurations, this model demonstrated slightly higher precision and recall for the majority of classes and achieved the highest macro-averaged precision, recall, and F1-score among the three, as seen in Table 2.

Table 1. Comprehensive per-class analysis of test results. Precision, recall, and F1-score are given for the Traditional, 2-stage, and MHC training approaches, together with the number of 1 s samples per class.

                       Precision                        Recall                           F1                        1 s
Class           Traditional  2 stage  MHC       Traditional  2 stage  MHC       Traditional  2 stage  MHC       samples
background      0.9629       0.9754   0.9644    0.9036       0.8978   0.9246    0.9323       0.935    0.9441      39956
cargo           0.6601       0.6901   0.7034    0.8645       0.8205   0.775     0.7486       0.7497   0.7374       5795
dredger         0.6788       0.6207   0.6865    0.6638       0.6965   0.669     0.6712       0.6564   0.6776       1410
fishing         0.4672       0.4296   0.1354    0.1219       0.124    0.4194    0.1933       0.1925   0.2047       1403
other           0.2375       0.2381   0.3361    0.3722       0.3701   0.239     0.29         0.2897   0.2793        970
passengership   0.6269       0.6202   0.7846    0.7326       0.751    0.5196    0.6757       0.6793   0.6252       2498
pilotvessel     0.0          0.0      0.0       0.0          0.0      0.0       0.0          0.0      0.0            25
pleasurecraft   0.005        0.0187   0.0392    0.0137       0.0116   0.0116    0.0073       0.0143   0.0179         73
rescue          0.0543       0.0      0.0       0.0291       0.0      0.0       0.0379       0.0      0.0           172
sailing         0.0          0.0      0.0       0.0          0.0      0.0       0.0          0.0      0.0            73
tanker          0.3469       0.2512   0.1378    0.6133       0.5967   0.558     0.4431       0.3535   0.221         181
tug             0.6466       0.6329   0.6779    0.6845       0.7129   0.6696    0.665        0.6705   0.6738      12719

Table 2. General performance metrics of the three different training approaches. Precision, recall, and F1 are macro-averaged, with weighted averages in parentheses.

                        Traditional       Two-stage         Full MHC
Time (h)                9.5               9.9               9.4
Accuracy                0.816             0.815             0.82
Precision (weighted)    0.3871 (0.827)    0.3765 (0.832)    0.364 (0.832)
Recall (weighted)       0.414 (0.815)     0.418 (0.815)     0.407 (0.820)
F1 (weighted)           0.386 (0.817)     0.381 (0.8192)    0.365 (0.822)
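The gap between the macro and weighted averages in Table 2 arises because macro averaging counts every class equally, so classes with zero scores pull the mean down, while weighted averaging is dominated by the large background class. A minimal sketch of the distinction (illustrative Python with hypothetical per-class values, not the paper's evaluation code):

```python
# Illustrative sketch of macro vs. weighted averaging of per-class scores.
# The per-class precisions and sample counts below are hypothetical.

def macro_avg(scores):
    # Macro: every class counts equally, so zero-score minority
    # classes drag the average down.
    return sum(scores.values()) / len(scores)

def weighted_avg(scores, support):
    # Weighted: each class contributes in proportion to its sample
    # count, so large classes dominate.
    total = sum(support.values())
    return sum(scores[c] * support[c] / total for c in scores)

precision = {"background": 0.96, "cargo": 0.75, "pilotvessel": 0.0, "rescue": 0.0}
support = {"background": 39956, "cargo": 5795, "pilotvessel": 25, "rescue": 73}

macro = macro_avg(precision)                 # low: the two zeros count fully
weighted = weighted_avg(precision, support)  # high: background dominates
```

This is why an unbalanced test set with several unpredicted classes can show a weighted precision above 0.8 alongside a macro precision below 0.4.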

4. Discussion

Efficient feature extraction in acoustic signal recognition can be assessed through an improvement in either energy spent or classification accuracy, while maintaining the same underlying experimental conditions. This means using the same dataset, model architecture, and hyperparameter configuration, so that any observed differences are attributable to the learning approach itself. In this research, a more theoretical direction was taken, aiming to analyze the behavior of the proposed method rather than to obtain the best possible classifying model. Therefore, no hyperparameter tuning was performed, and no unbalanced classes were removed.

An earlier assessment without oversampling yielded worse results: oversampling raised the accuracy of the full MHC configuration from 0.76 to 0.82, although this came at the cost of approximately three times longer training for the oversampled approach. Although oversampling improved the overall metrics, it also appears to have accelerated memorization. In the oversampled single β head approach, training accuracy reached approximately 98% by the second epoch, while in the non-oversampled case accuracy increased much more gradually. This rapid convergence possibly indicates that the model was fitting the training data too quickly. Although oversampling wasn't performed by reusing the same
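The oversampling discussed above can be sketched as duplicating minority-class items until every class matches the largest class count, which also illustrates why it can accelerate memorization: the duplicated items are seen repeatedly each epoch. The routine below is an illustrative sketch under an assumed list-of-(sample, label) layout, not the authors' implementation:

```python
import random
from collections import Counter

def random_oversample(samples, labels, seed=0):
    """Duplicate minority-class items until every class reaches the
    largest class count (illustrative sketch, not the paper's code)."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_s, out_l = list(samples), list(labels)
    for cls, n in counts.items():
        # Draw (target - n) random duplicates from this class's pool.
        pool = [s for s, l in zip(samples, labels) if l == cls]
        extra = [rng.choice(pool) for _ in range(target - n)]
        out_s += extra
        out_l += [cls] * len(extra)
    return out_s, out_l

# Toy usage: four "cargo" clips vs. one "rescue" clip become 4 vs. 4.
s, l = random_oversample(["a", "b", "c", "d", "e"],
                         ["cargo", "cargo", "cargo", "cargo", "rescue"])
```

Because the minority duplicates are exact copies, the network can fit them quickly, which is consistent with the rapid rise in training accuracy reported for the oversampled runs.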
