
Table 1. Parameters and statistics of database

No  Name                           Symbol  Type    Unit   Min    Average  Max     STD
1   Max Diameter of Aggregates    MDA     Input   mm     0.70   3.51     16.00   2.02
2   Curing Time                    CT      Input   days   7.00   93.12    360.00  103.31
3   Material Encoding Parameter 1  MEP1    Input   -      0      0.64     1       0.48
4   Material Encoding Parameter 2  MEP2    Input   -      0      0.11     1       0.31
5   Material Encoding Parameter 3  MEP3    Input   -      0      0.26     1       0.44
6   Water-to-binder ratio (W/B)    WB      Input   (w/w)  0.46   0.99     1.81    0.34
7   Binder-to-sand ratio (B/S)     BS      Input   (w/w)  0.09   0.26     0.71    0.14
8   Compressive Strength           CS      Output  MPa    0.36   3.76     15.20   2.76

In this work, a large number of different BPNN models have been developed and implemented. In particular, the following four (4) combination scenarios were examined:

I. Two different ways of encoding the NHL type: in the one case, the type of natural hydraulic lime was incorporated as a single input parameter, assigned the value 5 for NHL5, 3.5 for NHL3.5 and 2 for NHL2; in the other case, the type of natural hydraulic lime was incorporated as three separate input parameters, the parameter corresponding to the lime type actually present being assigned the value 1 and the other two the value 0 (i.e. 1,0,0 for mortars containing NHL5; 0,1,0 for mortars containing NHL3.5; and 0,0,1 for mortars containing NHL2). Both encodings are sketched after this list.

II. Two different cases regarding the number of input parameters: in the one case the maximum grain size of the aggregate was taken into account, while in the other it was excluded.

III. Two different cases related to the architecture of the neural networks: one with one hidden layer and the other with two hidden layers.

IV. Two different cases related to whether or not the data were normalized using the MinMax technique (see the normalization sketch after this list).

The combination of the above four scenarios resulted in the examination of a total of 16 (=2^4) different cases of neural networks. The ANNs were developed and trained with a number of hidden layers ranging from 1 to 2 and a number of neurons ranging from 1 to 30 in each hidden layer. Each ANN was developed and trained with a number of different activation functions, namely the log-sigmoid transfer function (logsig), the linear transfer function (purelin) and the hyperbolic tangent sigmoid transfer function (tansig) (Asteris et al. 2017, Cavaleri et al. 2017, Asteris et al. 2016, Psyllaki et al. 2018, Asteris et al. 2018, Nikoo et al. 2017, Nikoo et al. 2018, Nikoo et al. 2016, Asteris and Nikoo 2019). In order to allow a fair comparison of the various ANNs, the datasets used for their training were divided manually into training, validation and testing sets, using index arrays to state whether each data point belongs to the training, validation or testing set; in the general case, this division into the three groups is made randomly (a sketch of the split is given below). The number of neurons in each hidden layer, together with the three different transfer functions, results in a large number of neural networks for each of the 8 scenarios: for one hidden layer, each scenario corresponds to 5400 neural network architectures, while for two hidden layers it corresponds to 486000. Therefore, the number of neural networks designed and trained amounted to a total of 3931200 (=8×5400+8×486000). Each of these ANN models was trained on 152 data points out of the total of 253 (60% of the total), and the validation and testing of the trained ANN were performed with the remaining 101 data points; more specifically, 51 data points (20%) were used for validation and 50 data points (20%) for testing. The developed ANN models were then ranked by their RMSE values. Based on this ranking, the optimum ANN model for the prediction of the compressive strength is that of architecture 7-12-25-1, which corresponds to the
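As a minimal illustration of scenario I, the sketch below shows the two alternative encodings of the NHL type. The paper provides no code; the function name encode_nhl is hypothetical, and only the assigned values (5 / 3.5 / 2 and the one-hot triplets MEP1-MEP3) come from the text.

```python
# Illustrative sketch of the two NHL-type encodings of scenario I.
# Values taken from the text; the helper itself is hypothetical.

NHL_SCALAR = {"NHL5": 5.0, "NHL3.5": 3.5, "NHL2": 2.0}
NHL_ONE_HOT = {"NHL5": (1, 0, 0), "NHL3.5": (0, 1, 0), "NHL2": (0, 0, 1)}

def encode_nhl(lime_type: str, one_hot: bool):
    """Return the input feature(s) representing the binder type."""
    if one_hot:
        return NHL_ONE_HOT[lime_type]   # three inputs: MEP1, MEP2, MEP3
    return (NHL_SCALAR[lime_type],)     # a single numeric input

print(encode_nhl("NHL3.5", one_hot=True))   # -> (0, 1, 0)
```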
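Scenario IV concerns MinMax normalization. A minimal sketch of the technique, assuming column-wise scaling into [0, 1] with NumPy (the paper does not specify the target range):

```python
import numpy as np

def minmax_normalize(x: np.ndarray) -> np.ndarray:
    """Scale each column of x linearly into [0, 1] (MinMax technique)."""
    x_min = x.min(axis=0)
    x_max = x.max(axis=0)
    return (x - x_min) / (x_max - x_min)
```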
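The architecture search of scenario III can be enumerated as follows. This is an illustrative sketch only: logsig, purelin and tansig are MATLAB Neural Network Toolbox transfer-function names, and the per-scenario totals quoted in the text (5400 and 486000) are each 60 times the counts produced here, which suggests that further training settings, not detailed in the text, multiply the search space.

```python
from itertools import product

ACTIVATIONS = ("logsig", "purelin", "tansig")   # MATLAB transfer functions
NEURON_RANGE = range(1, 31)                     # 1..30 neurons per hidden layer

def architectures(n_hidden_layers: int):
    """Yield (layer_sizes, activations) candidates for the exhaustive search."""
    for sizes in product(NEURON_RANGE, repeat=n_hidden_layers):
        for acts in product(ACTIVATIONS, repeat=n_hidden_layers):
            yield sizes, acts

# One hidden layer: 30 sizes x 3 activations = 90 candidates;
# two hidden layers: 900 x 9 = 8100 candidates.
print(sum(1 for _ in architectures(1)), sum(1 for _ in architectures(2)))
```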
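The manual 60/20/20 division into training, validation and testing sets can be sketched as below. This is a hypothetical illustration: the paper states only that index arrays are used and that the assignment is random in the general case.

```python
import numpy as np

def split_indices(n: int, seed: int = 0):
    """Randomly assign n data points to train/validation/test (60/20/20)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = round(0.60 * n)   # 152 of the 253 points
    n_val = round(0.20 * n)     # 51 points
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_indices(253)
print(len(train_idx), len(val_idx), len(test_idx))   # -> 152 51 50
```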
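For completeness, a one-line sketch of the RMSE metric used to rank the trained models:

```python
import numpy as np

def rmse(y_true, y_pred) -> float:
    """Root-mean-square error used to rank the trained ANN models."""
    diff = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.sqrt(np.mean(diff ** 2)))
```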
