Anjireddy Mummadi et al. / Procedia Structural Integrity 70 (2025) 417–423
6. Discussion and Interpretation
The current study examines the performance of Faster R-CNN and Mask R-CNN in bolt and bolt-loosening detection, comparing their effectiveness as object detection models. The key evaluation metrics for object detection models are mean Average Precision (mAP) and Intersection over Union (IoU), as defined by Eqs. (1) and (2).
mean Average Precision (mAP) = Area under the Precision-Recall curve        (1)

Intersection over Union (IoU) = Area of Overlap / Area of Union             (2)
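As a concrete illustration of Eq. (2), the following is a minimal pure-Python sketch of IoU for axis-aligned bounding boxes; the (x1, y1, x2, y2) corner format is an assumption for illustration, as detection frameworks differ in box conventions.

```python
def iou(box_a, box_b):
    """Intersection over Union for two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp width/height to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Identical boxes give IoU 1.0; disjoint boxes give 0.0.
print(iou((0, 0, 10, 10), (0, 0, 10, 10)))    # → 1.0
print(iou((0, 0, 10, 10), (20, 20, 30, 30)))  # → 0.0
```

IoU is a ratio of areas, so it is scale-invariant and bounded in [0, 1], which is why it serves as a common localization criterion across datasets.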
A high mAP (mean Average Precision) value indicates strong object detection capability, demonstrating: (a) improved precision through fewer false positives; (b) higher recall, effectively detecting the majority of objects present; and (c) a balanced trade-off between precision and recall, making the model suitable for real-time applications.
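Eq. (1) can be made concrete with a short sketch that computes average precision as the area under a precision-recall curve, using the max-to-the-right precision interpolation common in detection benchmarks. The sample precision/recall values below are illustrative only, not results from this study.

```python
def average_precision(recalls, precisions):
    """Area under the precision-recall curve: sum over recall increments,
    each weighted by the interpolated precision (the maximum precision
    at any recall level to the right)."""
    # Make precision non-increasing by sweeping from right to left.
    interp = list(precisions)
    for i in range(len(interp) - 2, -1, -1):
        interp[i] = max(interp[i], interp[i + 1])
    ap = 0.0
    prev_r = 0.0
    for r, p in zip(recalls, interp):
        ap += (r - prev_r) * p  # rectangle: recall increment x precision
        prev_r = r
    return ap

# Illustrative curve: precision decays as recall grows.
r = [0.2, 0.4, 0.6, 0.8, 1.0]
p = [1.0, 0.9, 0.8, 0.6, 0.5]
print(average_precision(r, p))  # → 0.76
```

In a full mAP computation, this per-class average precision would be computed for each object class and then averaged across classes.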
Fig. 3. Epoch vs. mean Average Precision (mAP) for Faster R-CNN and Mask R-CNN.
Figure 3 illustrates the relationship between training epochs and mean Average Precision (mAP) for Faster R-CNN and Mask R-CNN, highlighting their performance trends across multiple training iterations. For Faster R-CNN, mAP progresses steadily from 0.1 at epoch 1 to 0.58 at epoch 25, indicating a consistent improvement in detection accuracy as training proceeds. In contrast, Mask R-CNN begins with a substantially higher mAP of 0.55 and reaches 1.75 by epoch 25, demonstrating superior performance, likely attributable to its segmentation capabilities, which refine object boundaries. The Mask R-CNN model was trained using Detectron2. Since Mask R-CNN consistently outperforms Faster R-CNN in mAP, the results underscore its superior detection accuracy, attributable to instance segmentation, which enables more precise object delineation. A high IoU (Intersection over Union) value signifies strong object localization: the detected bounding box closely aligns with the ground-truth box, reducing misalignment errors. The following inferences can be drawn: (i) the model effectively identifies objects and places them within their correct spatial boundaries; (ii) a high IoU value means the predicted and actual object regions overlap substantially, yielding fewer false positives and false negatives; (iii) in real-world applications such as structural health monitoring or defect detection, higher IoU values indicate better localization of details within the images.
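Inference (ii) above can be illustrated with a greedy matching sketch that counts true positives, false positives, and false negatives at an IoU threshold. The threshold of 0.5 and the (x1, y1, x2, y2) box format are illustrative assumptions; benchmark matching protocols vary.

```python
def match_detections(gt_boxes, pred_boxes, iou_thresh=0.5):
    """Greedily match predictions to ground-truth boxes; a prediction is a
    true positive only if its IoU with an unmatched ground-truth box meets
    the threshold. Returns (tp, fp, fn)."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    matched = set()
    tp = 0
    for pb in pred_boxes:
        best_iou, best_gt = 0.0, None
        for gi, gb in enumerate(gt_boxes):
            if gi in matched:
                continue
            v = iou(pb, gb)
            if v > best_iou:
                best_iou, best_gt = v, gi
        if best_gt is not None and best_iou >= iou_thresh:
            matched.add(best_gt)  # each ground-truth box matches at most once
            tp += 1
    fp = len(pred_boxes) - tp
    fn = len(gt_boxes) - tp
    return tp, fp, fn

# Two bolts; one tight detection, one badly misaligned box.
gt = [(0, 0, 10, 10), (20, 20, 30, 30)]
pred = [(1, 1, 10, 10), (40, 40, 50, 50)]
print(match_detections(gt, pred))  # → (1, 1, 1)
```

Higher IoU between predictions and ground truth lets more detections clear the threshold, which directly reduces the false-positive and false-negative counts that feed the precision-recall curve behind mAP.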