Vamsi Inturi et al. / Procedia Structural Integrity 14 (2019) 937–944
(4) The obtained signal x1(t) is checked against the IMF criterion. (5) If x1(t) satisfies the IMF criterion, x1(t) is selected as the first IMF and x(t) is replaced with the residual, r1 = x(t) - x1(t). (6) If the criterion is not satisfied, x(t) is replaced by x1(t) and the process is repeated until a monotonic residual is obtained.

The last step involves extracting statistical parameters such as mean, median, mode, kurtosis, skewness, maximum value and minimum value for the obtained IMFs. The idea is to identify at least one statistical parameter that can efficiently distinguish a healthy signal from a faulty one. Kurtosis measures the flatness or peakedness of the signal distribution. In general, its value is low for a healthy component and increases during the premature stages of a fault. The kurtosis value for a healthy signature is observed to be close to 3 [17]. Accordingly, IMF 8 is chosen for the present study, as its kurtosis value for the Y-component of the tri-axis accelerometer was obtained as 3.7. Therefore, the feature extraction analysis is performed on IMF 8 for the vibration (X, Y, Z and radial acceleration) and acoustic (mic1 and mic2) data, and the classification accuracies are compared using the decision tree algorithm.

4. Feature selection

Many of the extracted statistical features may not contribute effectively to the diagnosis, so identifying the most contributing features is an important task. A decision tree is an inverted tree-shaped representation used to identify the most significant features [18]. All of the extracted features are given as input to the decision tree and the output is a binary tree. Generally, the decision tree presents the information about the features as a set of 'if-then' rules.
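The statistical-parameter extraction from an IMF described above can be sketched in Python. This is a minimal illustration (the function and dictionary key names are my own, not from the paper); it uses the Pearson definition of kurtosis, under which a Gaussian signal scores close to 3:

```python
import numpy as np

def extract_features(imf):
    """Statistical descriptors of one IMF segment (illustrative sketch
    of the parameters discussed in the text)."""
    imf = np.asarray(imf, dtype=float)
    m = imf.mean()
    m2 = ((imf - m) ** 2).mean()  # second central moment (variance)
    m3 = ((imf - m) ** 3).mean()  # third central moment
    m4 = ((imf - m) ** 4).mean()  # fourth central moment
    return {
        "mean": m,
        "median": np.median(imf),
        "kurtosis": m4 / m2 ** 2,      # Pearson kurtosis: ~3 for Gaussian data
        "skewness": m3 / m2 ** 1.5,
        "max": imf.max(),
        "min": imf.min(),
        "std_error": imf.std(ddof=1) / np.sqrt(imf.size),
    }
```

For a long Gaussian record the kurtosis entry approaches 3; a value noticeably above that, such as the 3.7 reported here for IMF 8, indicates a more impulsive signature worth examining further.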
The features that influence the classification accuracy appear in the tree in descending order of significance, while the features that do not contribute to the classification accuracy are discarded. The decision tree consists of one root, a number of branches, a number of nodes and a number of leaves. The most dominant feature used for classification is found at the root of the tree. The statistical features extracted from IMF 8 are given as input to the J48 algorithm (WEKA implementation) and the output is the decision tree shown in Fig. 4. As can be seen from the figure, standard error, kurtosis, median and skewness appear at the top nodes of the decision tree and are therefore the dominant features.
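The split criterion that J48 (a C4.5 implementation) applies when choosing which feature to place at each node is information gain over candidate thresholds; the feature with the highest single-split gain is the one a tree places at (or near) the root. The following is a simplified sketch of that ranking idea, not the WEKA implementation, and the feature names in the usage example are hypothetical:

```python
import numpy as np

def entropy(labels):
    # Shannon entropy of a class-label array
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_info_gain(feature, labels):
    # Best information gain over candidate thresholds (midpoints between
    # sorted unique values), as C4.5 evaluates numeric attributes at a node.
    base = entropy(labels)
    values = np.unique(feature)
    best = 0.0
    for t in (values[:-1] + values[1:]) / 2.0:
        left = labels[feature <= t]
        right = labels[feature > t]
        w = len(left) / len(labels)
        gain = base - (w * entropy(left) + (1 - w) * entropy(right))
        best = max(best, gain)
    return best

def rank_features(X, y, names):
    # Rank features by their best single-split information gain; the
    # top-ranked feature is the root-node candidate of a decision tree.
    gains = [best_info_gain(X[:, j], y) for j in range(X.shape[1])]
    order = np.argsort(gains)[::-1]
    return [(names[j], gains[j]) for j in order]

# Hypothetical usage: column 1 separates the two classes perfectly,
# so it ranks first, as a root-node feature would.
X = np.array([[0.3, -1.0],
              [0.1, -2.0],
              [0.4,  1.0],
              [0.2,  2.0]])
y = np.array([0, 0, 1, 1])
ranked = rank_features(X, y, ["mean", "kurtosis"])
```

In the sketch above the perfectly separating feature attains a gain of 1 bit, which is why dominant features such as standard error and kurtosis surface at the top nodes of the generated tree.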
Fig. 4. Decision tree generated using the J48 algorithm for IMF 8 data