
Andrii Kompanets et al. / Procedia Structural Integrity 66 (2024) 388–395


4. Results

Table 1 compares the performance of our neural network for crack segmentation with our reimplementation of the method proposed by König et al. (König, 2021). While the original method by König et al. achieves an F1-score of 71.55 ±1.27 % on the test patches of the CSB dataset, the proposed improvements to the neural network yield a significant gain of about 8 percentage points, reaching 79.63 ±1.86 %.

The right side of Table 1 highlights the primary challenge of segmenting cracks in images of steel bridges: it reports the performance of the neural networks trained on patches but tested on full test images. While recall remains consistent with the patch-based results, precision drops by nearly half for both models. According to Eqs. (1) and (2), this discrepancy between recall and precision indicates a high rate of false positives.

This is further illustrated in Figure 3, which displays the segmentation outputs for three test images from the CSB dataset. The left and middle columns present output maps for an image that is relatively easy to segment and for one containing many crack-like features, respectively; the right column shows an image without any cracks. These examples demonstrate the false-positive issue: the neural networks often mistake features such as structural edges, shadows, and inspector markings for cracks. This issue is prevalent across most of the test images in the CSB dataset.

The best overall performance on full images was achieved by the proposed neural network, with an F1-score of 60.29 ±4.00 %. However, the high rate of false positives remains a significant problem, resulting in precision

values below 50 % and a gap of about 25 percentage points between precision and recall.

Table 1: Performance of the proposed method for crack segmentation (all values in %).

                            Test on patches                            |            Test on entire images
Method        Pr           Re           IoU          F1                | Pr           Re           IoU          F1
König et al.  71.98 ±3.16  71.03 ±2.26  55.72 ±1.55  71.55 ±1.27       | 33.22 ±2.24  73.15 ±1.73  29.56 ±1.61  45.61 ±1.93
Ours          81.10 ±0.66  78.24 ±2.96  66.19 ±2.54  79.63 ±1.86       | 49.43 ±5.60  74.49 ±7.20  43.22 ±3.83  60.29 ±4.00
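The interplay between false positives, precision, and recall described above can be sketched directly from the pixel-wise metric definitions of Eqs. (1) and (2). The following minimal illustration is not the authors' code, and the toy masks are hypothetical, not CSB data:

```python
# Pixel-wise segmentation metrics from flat binary masks, illustrating how
# false positives lower precision while leaving recall unchanged.
# Toy example only; the masks below are hypothetical, not CSB data.

def segmentation_metrics(pred, truth):
    """Precision, recall, IoU and F1 for equal-length 0/1 masks."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)       # true positives
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)   # false positives
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)   # false negatives
    pr = tp / (tp + fp) if tp + fp else 0.0
    re = tp / (tp + fn) if tp + fn else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    f1 = 2 * pr * re / (pr + re) if pr + re else 0.0
    return pr, re, iou, f1

# A 4-pixel crack is fully recovered, but 4 background pixels (e.g. a plate
# edge mistaken for a crack) are also flagged: recall stays at 1.0 while
# precision halves to 0.5.
truth = [1, 1, 1, 1, 0, 0, 0, 0]
pred  = [1, 1, 1, 1, 1, 1, 1, 1]
pr, re, iou, f1 = segmentation_metrics(pred, truth)
# pr = 0.5, re = 1.0
```

This mirrors the pattern on the right side of Table 1: extra false-positive pixels from crack-like background features inflate the denominator of precision only, which is why recall holds steady on full images while precision collapses.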

5. Conclusion

In this work, we propose a neural network for the segmentation of cracks in images of steel bridges. We improved an existing encoder-decoder method by using the recent ConvNeXt architecture as the encoder; combined with other deep-learning techniques, such as attention modules and stage-wise learning-rate decay, this improved the F1-score by about 8 percentage points, reaching 79.63 ±1.86 %. Furthermore, evaluating performance on entire images allowed us to identify the main challenge of segmenting cracks in images of steel bridges, namely, a large number of false-positive segmentations caused by crack-like features in the background of the images.

6. References

Andrii Kompanets, Davide Leonetti, Remco Duits, and Bert Snijder. Cracks in Steel Bridges (CSB) dataset. 4TU.ResearchData, 2024. https://doi.org/10.4121/6162a9b6-2a20-4600-8207-e9dcd53a264a

Andrii Kompanets, Remco Duits, Davide Leonetti, Nicky van den Berg, and H. H. Snijder. Segmentation tool for images of cracks. In Sebastian Skatulla and Hans Beushausen, editors, Advances in Information Technology in Civil and Building Engineering, pages 93–110, Cham, 2024. Springer International Publishing. https://doi.org/10.1007/978-3-031-35399-4_8

Remco Duits, Stephan P. L. Meesters, J.-M. Mirebeau, and Jorg M. Portegies. Optimal paths for variants of the 2D and 3D Reeds–Shepp car with applications in image analysis. Journal of Mathematical Imaging and Vision, 60:816–848, 2018.

Crack segmentation tool. https://github.com/akomp22/crack-segmentation-tool, 2023. Accessed: 2023-11-21.

Liang Xu, Han Zou, and Takayuki Okatani. How do label errors affect thin crack detection by DNNs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4413–4422. IEEE, 2023. https://doi.org/10.1109/CVPRW59228.2023.00464
