Victor Aulin et al. / Procedia Structural Integrity 59 (2024) 444–451
where α_ij, β_ij are the probabilities of a detection error of the 1st type (a false malfunction) and a recognition error of the 2nd type (missing a malfunction) for the i-th defect of the j-th unit, system, or assembly at the initial diagnostics stage, respectively. In the expanded version, expression (5) takes the following form:

P_ij = {1 − [α_ij (1 − β_ij) + β_ij (1 − α_ij)]} ,   (6)
where α_ij, β_ij are integer variables that take values of 1 or 0, respectively, for the recognition of defects of the 1st and 2nd types. In the general case, the erroneous identification of the i-th defect in the j-th unit, system, or assembly when, in reality, no such defect exists (a false malfunction) causes losses C_1st type. The costs C_2nd type, associated with the conditionally repeated carrying out of an operation, are formed by the errors β_ij (missing a malfunction). Current production losses can be represented as a functional:
C = (C_1st type + C_2nd type + C_TR) · N_pp ,   (7)

where C_1st type are the total losses caused by detecting any defects of the 1st type; C_2nd type are the total losses caused by detecting any defects of the 2nd type; C_TR are the total losses caused by defects of the technical route of the maintenance and repair operation complexes;
N_pp is the production program of the enterprise. Therefore, reducing internal production losses at the enterprise (Aulin, Zamota, Hrynkiv, Zamota, Chernai, 2018), where a complex of maintenance and repair operations is provided for the units, systems, and assemblies of automobiles, is possible only when the problem of reducing the absolute values of errors at all stages of production processes and maintenance and repair operations is solved.

To improve the accuracy of defect recognition and prediction and to reduce the training time of the ANN method, the data arriving at the input layer of the network should be standardized and fall within a specific range. This highlights the need for scaling the training dataset. The training dataset is presented to the input layer in the form of binary values, integers, floating-point numbers, etc. Their dispersion should lie within a certain range, depending on the type of activation function being used. It should be noted that, most frequently, the incoming signals on each neuron of the input layer have a wide range of values, which significantly reduces the ANN method's ability to learn and analyze. The block diagram of the data normalization algorithm used during the training of the Artificial Neural Network with the sigmoid activation function is shown in Figure 2. In the course of the presented data scaling algorithm's operation, the transfer to the activation function is performed in an optimal manner. This algorithm is universal: when it is applied to an activation function other than the sigmoid, both the scaling interval and the normalization type change at step 3. It should be noted that linear normalization is used when the variable interval x_i is densely filled with values and is oriented towards the boundary values (x_min, x_max).
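The linear normalization mentioned above can be illustrated with a minimal sketch. This is a generic min-max scaling of a feature column onto a fixed interval before it reaches a sigmoid neuron; the exact steps and interval of the algorithm in Figure 2 are not reproduced here, and the function names are illustrative, not taken from the paper:

```python
import math

def linear_normalize(values, lo=0.0, hi=1.0):
    """Linearly map a feature column onto [lo, hi] using its min and max.

    A generic min-max scaling sketch (assumed form: x' = lo + (hi - lo) *
    (x - x_min) / (x_max - x_min)); not the paper's exact Figure 2 algorithm.
    """
    x_min, x_max = min(values), max(values)
    if x_max == x_min:                       # degenerate column: map to midpoint
        return [(lo + hi) / 2.0 for _ in values]
    span = x_max - x_min
    return [lo + (hi - lo) * (x - x_min) / span for x in values]

def sigmoid(z):
    """Sigmoid activation, the function assumed for the input layer here."""
    return 1.0 / (1.0 + math.exp(-z))

# Raw input signals with a wide range of values, as described in the text.
raw = [12.0, 37.5, 81.0, 96.4]
scaled = linear_normalize(raw)               # now within [0, 1]
activations = [sigmoid(x) for x in scaled]   # all activations lie in (0.5, 0.74)
```

Keeping the scaled inputs in a narrow interval around zero keeps the sigmoid away from its saturated tails, which is what speeds up training.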
Taking into account the theoretical works of Hecht-Nielsen (Li et al., 2011) and the analysis of the practical applicability of the ANN method to various classification tasks, we can conclude that using more than two hidden layers of neurons in network design is in most cases impractical.
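The architecture this conclusion points to can be sketched as a forward pass through at most two hidden layers. The layer sizes, weights, and input below are arbitrary illustrations, not values from the paper:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One fully connected layer with sigmoid activation."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def rand_matrix(rows, cols):
    return [[random.uniform(-1.0, 1.0) for _ in range(cols)] for _ in range(rows)]

random.seed(0)

# 3 inputs -> hidden(4) -> hidden(4) -> 2 outputs: two hidden layers, the
# practical upper bound argued for above. All sizes are assumptions.
W1, b1 = rand_matrix(4, 3), [0.0] * 4
W2, b2 = rand_matrix(4, 4), [0.0] * 4
W3, b3 = rand_matrix(2, 4), [0.0] * 2

x = [0.2, 0.8, 0.5]              # already-normalized input signals
h1 = layer(x, W1, b1)
h2 = layer(h1, W2, b2)
out = layer(h2, W3, b3)          # two class scores, each in (0, 1)
```

Adding further hidden layers multiplies the number of weights to train without, for most classification tasks of this kind, improving recognition accuracy.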