Nima Rezazadeh et al. / Procedia Structural Integrity 80 (2026) 411–417
2.2. Architecture overview

The framework converts each vibration record into a numeric feature vector and feeds it to a neural network for fault classification. A companion branch aligns source and target data in a shared feature space to enable transfer across operating conditions. During training, the model updates class prototypes and assigns attention weights to target samples based on similarity and prediction confidence. These weights amplify reliable examples and suppress uncertain ones, aiding adaptation without overfitting. Prototype-to-instance similarity plots additionally give technicians clear diagnostic insight.

2.3. Prototype construction

To capture class-level structure, the framework maintains two centroids, or prototypes, for each fault class $k$: one for source data ($c_k^{s}$) and one for target data ($c_k^{t}$). After each mini-batch, these prototypes are updated with a momentum term $\mu$, so they reflect both past knowledge and new observations. The source prototype $c_k^{s}$ moves toward the mean of embeddings whose true label is $k$, whilst $c_k^{t}$ moves toward the mean of embeddings whose current pseudo-label is $k$:

c_k^{s} \leftarrow (1-\mu)\,c_k^{s} + \mu \operatorname{mean}_{i:\,y_i=k} f(x_i^{s}), \qquad c_k^{t} \leftarrow (1-\mu)\,c_k^{t} + \mu \operatorname{mean}_{j:\,\hat{y}_j=k} f(x_j^{t})   (1)

Selecting a different momentum parameter $\mu$ changes how quickly the prototypes react to new data. The smooth update above ensures the prototypes track gradual shifts without overreacting to noise.

2.4. Attention weighting

In the attention-weighting scheme, each target embedding $z_j = f(x_j^{t})$ receives a weight $w_j$ based on its cosine similarity to its class prototype $c_{\hat{y}_j}^{t}$, scaled by a temperature $\tau$, and on its prediction confidence, measured by the normalized entropy $H_j$. A blending factor $\lambda$ balances similarity and confidence. This assigns higher weights to confident, well-aligned samples while reducing the influence of uncertain or distant ones.

s_j = \frac{1}{\tau} \cos\!\left(z_j,\, c_{\hat{y}_j}^{t}\right)   (2)

H_j = -\frac{1}{\log K} \sum_{k=1}^{K} p_{j,k} \log p_{j,k}   (3)

w_j = \bigl((1-\lambda)\, s_j + \lambda\bigr)\bigl(1 - H_j\bigr)   (4)

with $p_{j,k}$ denoting the softmax probability of class $k$ for sample $j$.
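The prototype update and attention-weighting rules can be sketched in a few lines. The following is a minimal NumPy illustration of the momentum update and the similarity–confidence weight; the function names and the default values of the momentum (`mu`), temperature (`tau`), and blending factor (`lam`) are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def update_prototype(c, feats, mu=0.1):
    # Momentum (EMA) update: c <- (1 - mu) * c + mu * mean of batch embeddings
    return (1.0 - mu) * c + mu * feats.mean(axis=0)

def attention_weight(z, c_hat, probs, tau=0.1, lam=0.5):
    # Cosine similarity of the embedding z to its assigned prototype c_hat,
    # scaled by the temperature tau
    s = float(z @ c_hat) / (np.linalg.norm(z) * np.linalg.norm(c_hat)) / tau
    # Normalized entropy of the softmax prediction, in [0, 1]
    K = probs.size
    H = -np.sum(probs * np.log(probs + 1e-12)) / np.log(K)
    # Blend similarity and confidence; uncertain samples (H near 1) are suppressed
    return ((1.0 - lam) * s + lam) * (1.0 - H)
```

Note that a target sample with a near-uniform softmax output has normalized entropy close to 1, so its weight collapses toward zero regardless of how close it sits to its prototype.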
2.5. Loss functions

Training jointly optimizes three objectives within a unified loss function. The source classification loss $\mathcal{L}_{\mathrm{src}}$ ensures accurate predictions on labeled source data. The weighted target consistency loss $\mathcal{L}_{\mathrm{tgt}}$ encourages confident target predictions to match their pseudo-labels. The domain adversarial loss $\mathcal{L}_{\mathrm{adv}}$, applied via a gradient-reversal layer, aligns the global feature distributions of the source and target domains. Prototype alignment is handled implicitly through the update-and-weight mechanism (Sections 2.3–2.4) rather than through an explicit loss. The total loss combines all components as:

\mathcal{L}_{\mathrm{src}} = \mathbb{E}_{(x,y)\in\mathcal{S}}\left[\mathrm{CE}\bigl(f(x),\, y\bigr)\right]   (5)

\mathcal{L}_{\mathrm{tgt}} = \mathbb{E}_{(x,w,\hat{y})\in\mathcal{T}}\left[w\,\mathrm{CE}\bigl(f(x),\, \hat{y}\bigr)\right]   (6)

\mathcal{L}_{\mathrm{adv}} = \mathrm{BCE}\bigl(h(f(x^{s})),\, 1\bigr) + \mathrm{BCE}\bigl(h(f(x^{t})),\, 0\bigr)   (7)

\mathcal{L} = \mathcal{L}_{\mathrm{src}} + \alpha\,\mathcal{L}_{\mathrm{tgt}} + \beta\,\mathcal{L}_{\mathrm{adv}}   (8)

where $\alpha$ and $\beta$ control the relative strength of the target-consistency and adversarial losses, respectively.
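As a sketch of how the loss terms combine, the following NumPy fragment evaluates the three components and their weighted sum for pre-computed softmax and domain-discriminator outputs. It is an illustration under assumed inputs: the gradient-reversal layer (which only alters the backward pass) is omitted, and all variable names and the values of `alpha` and `beta` are placeholders, not the authors' configuration.

```python
import numpy as np

def cross_entropy(probs, label):
    # CE for a single sample given its softmax output and integer label
    return -np.log(probs[label] + 1e-12)

def total_loss(src_probs, src_labels, tgt_probs, tgt_pseudo, tgt_weights,
               d_src, d_tgt, alpha=1.0, beta=0.1):
    # Source classification loss: mean CE over labeled source samples
    L_src = np.mean([cross_entropy(p, y) for p, y in zip(src_probs, src_labels)])
    # Weighted target consistency loss: CE against pseudo-labels,
    # scaled by the per-sample attention weights w
    L_tgt = np.mean([w * cross_entropy(p, y)
                     for p, y, w in zip(tgt_probs, tgt_pseudo, tgt_weights)])
    # Domain adversarial loss: BCE pushing the domain head h toward
    # 1 on source features and 0 on target features
    L_adv = (-np.mean(np.log(d_src + 1e-12))
             - np.mean(np.log(1.0 - d_tgt + 1e-12)))
    # Total objective: weighted sum of the three components
    return L_src + alpha * L_tgt + beta * L_adv
```

With near-perfect classifier and discriminator outputs, all three terms approach zero, so the total loss does as well; in training, the gradient-reversal layer would flip the sign of the adversarial gradient reaching the feature extractor.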