
damaged regions are formed throughout the microstructure, normally in the struts. It can be seen in Fig. 5b that less stress is needed to develop a given number of damaged regions in the uniaxial cases. For the load cases at 25° and 65°, a higher stress is required to generate the same number of damaged regions, and the stress is highest for the 45° case. This can be explained by the fact that the predominant damage mechanism during compression of foams is bending of the struts. In the presence of lateral pressure (the biaxial compression cases), this bending is restrained and, as a result, a higher remotely applied stress is needed to cause bending failure.

4. Neural network based surrogate model

This section describes a neural network based surrogate model developed to predict the stress-strain response of the foam volume element subjected to biaxial compression. Three neural networks, namely outer, inner and hybrid, are developed with the objective of predicting the macroscopic stresses σ11 and σ22 of the volume element with edge length 150 pixels under strain loading at 45° (test data). The training data consists of the stress-strain responses for the load cases from 5° to 85° (except 45°) in increments of 5°, along with the uniaxial compression cases along the 1- and 2-directions (refer Fig. 2a). Each stress-strain curve has 41 stress-strain data pairs resulting from the simulation step size (refer Fig. 4). The neural networks are trained on different amounts of data. The first training dataset consists of the two uniaxial compression cases (dataset 1). The second training dataset (dataset 2) adds the 5° and 85° load cases to those in dataset 1. Dataset 3 additionally includes the 10° and 80° cases. The training dataset keeps increasing in size in this way up to dataset 9, which covers the 40° and 50° cases as well. Fig. 6 shows the selection of training datasets; the curly brackets with numbers indicate the training datasets.

Fig. 6. Training datasets for neural networks (datasets 2-4) formed by incrementally adding data from both sides (1- and 2-directions).
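As an illustration of this incremental scheme, the following minimal Python sketch assembles datasets 1-9 from a dictionary of stress-strain curves keyed by load case. The variable names are hypothetical; the authors' own data handling is not shown in the paper.

```python
# Minimal sketch (hypothetical names): build the nested training datasets 1-9.
# `curves` is assumed to map each load case to its 41 stress-strain pairs,
# e.g. curves["uniaxial_1"], curves["uniaxial_2"], curves[5], ..., curves[85].

def build_datasets(curves):
    datasets = []
    # Dataset 1: only the two uniaxial compression cases
    cases = ["uniaxial_1", "uniaxial_2"]
    datasets.append({c: curves[c] for c in cases})
    # Datasets 2-9: add the load angles theta and (90 - theta) in 5-degree
    # steps from both sides, skipping the 45-degree test case
    for theta in range(5, 45, 5):
        cases = cases + [theta, 90 - theta]
        datasets.append({c: curves[c] for c in cases})
    return datasets  # datasets[8] (dataset 9) covers 40 and 50 degrees as well
```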

4.1. Outer and inner neural networks

Two feed-forward neural networks are trained on data generated by averaging the FE results of four statistically equivalent microstructures. The first network (the outer network) is trained on data from the volume element with edge length 150 pixels, the size adopted in section 3. The second network (the inner network) is trained on data from a volume element with edge length 100 pixels (0.26 mm). As this is a smaller volume element, its results are not representative, but they still give useful information about the material response. The architecture of both neural networks is given in Fig. 7. The input layer consists of the two macroscopic strains and the output layer of the two macroscopic stresses. Through a model selection process, two hidden layers with 30 neurons each and L2 regularization with a coefficient of 0.01 are adopted. The 'relu' activation function, first introduced by Nair and Hinton (2010), is used in all layers. The Adam optimizer proposed by Kingma and Ba (2014) is used together with the mean squared error as the loss function. The networks are implemented in Python (Van Rossum and Drake (2009)) using the TensorFlow (Abadi et al. (2016)) and Keras (Chollet et al. (2015)) frameworks.
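For concreteness, a minimal Keras sketch of this architecture is given below. It is an illustration only, not the authors' script; the function name, training call and dataset variables are assumptions, while the hyperparameters (two hidden layers of 30 'relu' neurons, L2 regularization of 0.01, Adam optimizer, mean squared error loss) follow the description above.

```python
# Illustrative Keras sketch of the described surrogate architecture:
# 2 inputs (macroscopic strains), 2 hidden layers of 30 'relu' neurons with
# L2 regularization of 0.01, and 2 outputs (macroscopic stresses).
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_surrogate():
    model = tf.keras.Sequential([
        layers.Input(shape=(2,)),                      # macroscopic strains
        layers.Dense(30, activation="relu",
                     kernel_regularizer=regularizers.l2(0.01)),
        layers.Dense(30, activation="relu",
                     kernel_regularizer=regularizers.l2(0.01)),
        layers.Dense(2),                               # macroscopic stresses
    ])
    model.compile(optimizer="adam", loss="mse")        # Adam + MSE loss
    return model

# Hypothetical usage: X_train, y_train would hold the strain/stress pairs
# collected from one of the training datasets described above.
# model = build_surrogate()
# model.fit(X_train, y_train, epochs=500, verbose=0)
```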
