Hugo Mesquita Vasconcelos et al. / Procedia Structural Integrity 77 (2026) 601–610
2.5. Hierarchical Multi-head Classification Framework

A multi-head classification framework was implemented on a ResNet-18 backbone, introduced by He et al. (2015), which enables the evaluation of different training processes. The backbone extracts latent embeddings from each input representation, which are then passed to three independent classification heads, each corresponding to a specific stage of hierarchical complexity: stage 7 (pre-operational), stage 8 (primary), and stage 9 (concrete). The pre-operational head performs a binary discrimination between background noise and vessel presence. The primary head provides an intermediate categorization into five vessel-size groups: micro, small, medium, big, and other. Micro includes pleasure craft and sailing vessels; small includes pilot and fishing vessels; medium includes rescue and passenger ships; big includes the cargo, tanker, and dredger classes; other includes the samples labelled as other. The concrete head performs the complete identification across all twelve defined classes.

The same training framework governs all experimental configurations by activating different combinations of heads according to the test being performed. The relative contributions of the three heads are governed by the coefficients α, θ, and β, which evolve across epochs according to predefined schedules. In the single concrete-stage configuration, which approximates a traditional unstructured AI learning approach, only the concrete head (β) is active, corresponding to direct multi-class classification; in the actual evaluation a ramp-down α was present during the early epochs, but this was considered not to affect the mirroring of the traditional AI approach. In the two-stage configuration, the pre-operational (α) and concrete heads are trained sequentially, with the binary stage active during the first eight epochs before transitioning to the multi-class head.
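The targets for the three heads can all be derived from the concrete vessel label. The following is a minimal sketch of that mapping, using the size groups given above; the class-name strings are illustrative, and classes not named in the text are omitted since the full twelve-class list is not reproduced here.

```python
from typing import Optional

# Mapping from concrete vessel classes (illustrative names) to the
# five primary size groups described in the text. Unnamed classes
# from the full twelve-class set are omitted.
PRIMARY_GROUP = {
    "pleasure_craft": "micro",
    "sailing_vessel": "micro",
    "pilot_vessel": "small",
    "fishing_vessel": "small",
    "rescue_ship": "medium",
    "passenger_ship": "medium",
    "cargo": "big",
    "tanker": "big",
    "dredger": "big",
    "other": "other",
}

def primary_label(concrete_class: str) -> str:
    """Map a concrete-head label to one of the five primary size groups."""
    return PRIMARY_GROUP[concrete_class]

def preoperational_label(concrete_class: Optional[str]) -> int:
    """Binary pre-operational label: 1 = vessel present, 0 = background noise."""
    return 0 if concrete_class is None else 1
```

Deriving all head targets from the single concrete annotation keeps the three heads label-consistent without requiring separate annotation passes.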
In the full hierarchical learning configuration, all three heads are progressively engaged: the pre-operational head is active for the first eight epochs, the primary (θ) head is introduced between the fifth and tenth epochs, and the concrete head is activated at epoch fifteen. This structured progression is intended to mirror the increasing task complexity described by the MHC and to emulate the hierarchical learning process it proposes. All configurations are trained for forty epochs with identical hyperparameters and batch size, with the optimal epoch selected based on the macro-averaged F1-score of the concrete head. This adaptive framework allows systematic comparison of distinct hierarchical learning strategies within a unified implementation, thereby intending to isolate the effect of hierarchical task-complexity learning on overall learning efficiency and model stability.
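The epoch-dependent coefficient schedules for the three configurations can be sketched as follows. Only the activation epochs are specified in the text, so the exact ramp shapes, transition values, and the behaviour of θ after epoch ten are assumptions of this sketch.

```python
def head_weights(config: str, epoch: int) -> tuple:
    """Return assumed (alpha, theta, beta) coefficients for a given epoch.

    Schedules follow the activation epochs described in the text; the
    ramp shapes themselves are illustrative assumptions.
    """
    if config == "single":
        # Concrete head active throughout; a ramp-down alpha was present
        # during the early epochs (5-epoch linear decay assumed here).
        alpha = max(0.0, 1.0 - epoch / 5.0)
        return (alpha, 0.0, 1.0)
    if config == "two_stage":
        # Binary stage for the first eight epochs, then the concrete head.
        return (1.0, 0.0, 0.0) if epoch < 8 else (0.0, 0.0, 1.0)
    if config == "hierarchical":
        alpha = 1.0 if epoch < 8 else 0.0        # pre-operational: first 8 epochs
        theta = min(1.0, max(0.0, (epoch - 5) / 5.0))  # ramps in over epochs 5-10
        beta = 1.0 if epoch >= 15 else 0.0       # concrete head from epoch 15
        return (alpha, theta, beta)
    raise ValueError(f"unknown configuration: {config!r}")
```

The per-epoch loss would then be the weighted sum α·L_pre + θ·L_primary + β·L_concrete over the active heads.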
Fig. 2. Distribution of the three different heads per training evaluation
3. Results

3.1. Traditional AI Approach
In the traditional unstructured AI configuration, only the concrete head (β) was intended to be active for all 40 epochs. However, as seen in the Traditional AI panel of Fig. 2, an initial α coefficient was present and decreased during the early epochs; this was considered not to affect the unstructured learning.