Fig. 1. Configuration of Acousto-Ultrasonic Test Setup

Fig. 2. Architecture of Convolutional Neural Network

2.4. Input Data for Training CNN

Since the CNN is based on image classification, a large set of training data is required. In this study, the time-frequency representation of the acoustic waveforms on the Mel scale is used. The Mel scale is a perceptual scale that reflects how the human auditory system perceives pitch. Mel spectrograms are among the most popular forms of training data for neural networks in general (Nasiri et al., 2019; Lu et al., 2019). First, the acoustic waveforms are rescaled to the human auditory frequency range so that they can be processed on the Mel scale: the waveforms recorded at a sampling rate of 1 MHz are rescaled to 22 kHz (Meng et al., 2019; Chuang et al., 2019). All acoustic waveforms recorded during this study are rescaled and processed as Mel spectrograms. A total of 30000 waveform images are used for training the CNN, and its performance is tested with 15000 images. RGB images with a size of 32 x 32 pixels are used for both the training and test datasets.
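A minimal sketch of this preprocessing step is given below, assuming the librosa library. The sampling rates (1 MHz recording rescaled to 22 kHz), the 32 Mel bands and the RGB image output follow the description above; the FFT/hop parameters, the synthetic test burst and the file name are illustrative assumptions, not the authors' exact settings, and the final resizing to 32 x 32 pixels is left to the image pipeline.

    import numpy as np
    import librosa
    import matplotlib.pyplot as plt

    ORIG_SR = 1_000_000   # acquisition sampling rate of the AU waveforms (1 MHz)
    TARGET_SR = 22_000    # rescaled rate in the human auditory range (22 kHz)
    N_MELS = 32           # number of Mel bands, matching the 32 x 32 input images

    def waveform_to_mel_db(waveform, orig_sr=ORIG_SR, target_sr=TARGET_SR):
        """Resample a recorded waveform and return its Mel spectrogram in dB."""
        y = librosa.resample(np.asarray(waveform, dtype=float),
                             orig_sr=orig_sr, target_sr=target_sr)
        # n_fft and hop_length are assumed values chosen for short records.
        mel = librosa.feature.melspectrogram(y=y, sr=target_sr,
                                             n_fft=256, hop_length=64,
                                             n_mels=N_MELS)
        return librosa.power_to_db(mel, ref=np.max)

    # Illustrative usage with a synthetic decaying burst standing in for a real
    # recorded waveform (kept below the 11 kHz Nyquist limit of the target rate).
    t = np.arange(0.0, 0.05, 1.0 / ORIG_SR)            # 50 ms record at 1 MHz
    burst = np.sin(2 * np.pi * 5e3 * t) * np.exp(-200 * t)
    mel_db = waveform_to_mel_db(burst)

    # Save through a colour map to obtain an RGB spectrogram image; resizing to
    # exactly 32 x 32 pixels can then be done before feeding the CNN.
    plt.imsave("mel_spectrogram.png", mel_db, origin="lower", cmap="viridis")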
