account in the output parameter of shear strength (v), which is defined (Eq. 1) as the ratio of the shear force to the product of the width and the effective depth.

Case D. The fourth case takes into account only four parameters for the modelling of the reinforced concrete beam under shear loading. Namely, the input parameters are: i) the product of the longitudinal reinforcement and the associated reinforcement ratio, ii) the product of the transversal reinforcement and the associated reinforcement ratio, iii) the shear span to effective depth ratio of the beam (a/d) and iv) the effective span to effective depth ratio of the beam (L/d). Concerning the output parameter, the shear strength in dimensionless terms, as defined by the following equation, has been used:

ν = v / f_c    (2)

where f_c is the cylinder compressive strength of concrete.

The above four cases have been investigated for two alternatives regarding the architecture of the neural networks (one hidden layer or two hidden layers) and for two alternatives regarding whether or not the data has been normalized through the use of the MinMax technique. The combination of these scenarios resulted in the examination of a total of 16 (=4×2×2) different cases of neural networks. The development and training of the ANNs is performed with a number of hidden layers ranging from 1 to 2 and with a number of neurons ranging from 1 to 30 for each hidden layer. Each one of the ANNs is developed and trained for a number of different activation functions, such as the Log-sigmoid transfer function (logsig), the Linear transfer function (purelin) and the Hyperbolic tangent sigmoid transfer function (tansig) (Asteris et al. 2017, Cavaleri et al. 2017, Asteris et al. 2016a, Psyllaki et al. 2018, Asteris et al. 2018, Nikoo et al. 2017, Nikoo et al. 2018, Nikoo et al. 2016, Asteris and Nikoo 2019).

In order to have a fair comparison of the various ANNs, the datasets used for their training are manually divided by the user into training, validation and testing sets, using appropriate indices to state whether a data-point belongs to the training, validation or testing set. In the general case, the division of the datasets into the three groups is made randomly. The number of neurons in each hidden layer, as well as the three different transfer functions, result in a high number of neural networks for each of the 16 scenarios. Therefore, the number of neural networks designed and trained amounted to a total of 3,931,200 (8×5,400 NN architectures with one hidden layer and 8×486,000 NN models with two hidden layers). Each one of these ANN models was trained on 180 data-points out of the total of 300 data-points (60% of the total number), and the validation and testing of the trained ANN were performed with the remaining 120 data-points. More specifically, 60 data-points (20%) were used for the validation of the trained ANN and 60 data-points (20%) were used for the testing.
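A minimal sketch of this training pipeline is given below, assuming a Python/scikit-learn implementation (the paper does not state the software used). The arrays X and y are placeholders for the 300-point experimental database, and the MLPRegressor settings, the random 60/20/20 split and the exhaustive loop are illustrative assumptions rather than the authors' actual code; scikit-learn's 'logistic', 'identity' and 'tanh' activations stand in for logsig, purelin and tansig.

```python
import itertools
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

# Placeholders: X holds the input parameters of one of the four cases
# (e.g. Case D has four inputs) and y the shear strength output.
X = np.random.rand(300, 4)
y = np.random.rand(300)

# Manual 60/20/20 division into training, validation and testing sets via index arrays.
idx = np.random.permutation(len(X))
train_idx, val_idx, test_idx = idx[:180], idx[180:240], idx[240:]

# Optional MinMax normalisation (one of the two normalisation scenarios).
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X[train_idx])
X_val = scaler.transform(X[val_idx])
X_test = scaler.transform(X[test_idx])

# Grid over 1-2 hidden layers, 1-30 neurons per layer and three activation functions.
best_model, best_score = None, -np.inf
for n_layers in (1, 2):
    for layout in itertools.product(range(1, 31), repeat=n_layers):
        for activation in ('logistic', 'identity', 'tanh'):
            model = MLPRegressor(hidden_layer_sizes=layout,
                                 activation=activation,
                                 max_iter=2000, random_state=0)
            model.fit(X_train, y[train_idx])
            score = model.score(X_val, y[val_idx])  # select on the validation set
            if score > best_score:
                best_model, best_score = model, score
```

Enumerating the full two-hidden-layer grid in this way is computationally heavy; the loop is shown only to make explicit how the combinations of layers, neurons and activation functions multiply into the millions of candidate networks reported above.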
For the evaluation of the developed ANN models four performance indexes have been used. Namely, the values of the Pearson correlation coefficient (R), the root mean square error (RMSE), the mean absolute percentage error (MAPE), and the variance accounted for (VAF) are presented. Furthermore, a recently proposed engineering index, the a20-index, has also been used for the evaluation of the performance of the NN models. Namely, the a20-index, proposed for the reliability assessment of the developed ANN models, is expressed by:

a20-index = m20 / M    (3)

where M is the number of dataset samples and m20 is the number of samples whose ratio of experimental value to predicted value lies between 0.80 and 1.20. Note that for a perfect predictive model, the value of the a20-index is expected to be unity. The proposed a20-index has the advantage that its value has a physical engineering meaning: it declares the proportion of samples whose predicted values deviate by no more than ±20% from the experimental values.
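The five indexes can be computed directly from the vectors of experimental and predicted values. The sketch below assumes the usual textbook definitions of R, RMSE, MAPE and VAF (the paper does not spell out their formulas) together with Eq. (3) for the a20-index; it is an illustration, not the authors' code.

```python
import numpy as np

def performance_indexes(y_exp, y_pred):
    """Performance indexes used to rank the ANN models (standard definitions assumed)."""
    y_exp = np.asarray(y_exp, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)

    r = np.corrcoef(y_exp, y_pred)[0, 1]                           # Pearson correlation coefficient
    rmse = np.sqrt(np.mean((y_exp - y_pred) ** 2))                 # root mean square error
    mape = 100.0 * np.mean(np.abs((y_exp - y_pred) / y_exp))       # mean absolute percentage error
    vaf = 100.0 * (1.0 - np.var(y_exp - y_pred) / np.var(y_exp))   # variance accounted for

    ratio = y_exp / y_pred                                         # experimental / predicted
    m20 = np.count_nonzero((ratio >= 0.80) & (ratio <= 1.20))      # samples within +/-20% deviation
    a20 = m20 / len(y_exp)                                         # Eq. (3): a20-index = m20 / M

    return {"R": r, "RMSE": rmse, "MAPE": mape, "VAF": vaf, "a20": a20}
```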