2.2. Development of the simulation model and virtual sensor measurements

A detailed geometric model, derived from 2D drawings, captures the bridge's intricate features, including non-symmetrical cross-sections, cross slopes, and longitudinal superelevation (see Fig. 2). The geometry is meshed using shell and beam elements. To reduce the computational effort for creating the model database while preserving accuracy and geometric detail, the Static Condensation Reduced Basis Element method (Huynh et al. (2013)) is employed. Advantages of this approach include:

• Significant reduction in computational effort during online simulation, maintaining accuracy without the need for simplified geometries or manual sub-models.
• Flexible component replacement enabled by static condensation, facilitating model adaptivity.
• Parameterization of each component's stiffness, which is crucial for adaptivity.

The bridge's model is divided into 81 components, with each component's stiffness parameterized by a strength factor αᵢ applied to the initial stiffness kᵢ,₀. Each parameter can be set to five possible states αᵢ ∈ {0, 1, 2, 3, 4} · 10⁻⁴, where αᵢ = 0 denotes the undamaged state:

kᵢ = kᵢ,₀ · (1 − αᵢ)    (2)

The cracked model is simulated using a high-fidelity finite element model. To account for measurement noise in the creation of the model database, white noise is added to the sensor measurements by randomly drawing from a normal distribution with a small standard deviation (0.01%). The small standard deviation prevents the noise from overshadowing changes in the measured values relative to the undamaged state. For each measured damage state in the model database, 100 samples with random artificial noise are generated (see the first sketch below).

Since a simultaneous prediction of all 81 model parameters is not possible with the existing methods, a reduction of the parameter space is necessary. In this case, the location of the damage is spatially restricted by engineering considerations based on regular visual inspections, which is taken into account when selecting the parameters. Therefore, only six parameters in the immediate vicinity of the damage are varied in the model database. To reduce the computational and training effort for the OCTs, 1,200 samples are drawn from all possible parameter combinations using Latin Hypercube Sampling (see the second sketch below). In combination with the artificial noise described above, 120,000 noisy samples are available. The dataset is divided into 70% training data and 30% test data.
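To make Eq. (2) and the noise model concrete, the following is a minimal Python sketch, assuming NumPy; the function names (damaged_stiffness, noisy_samples) are illustrative rather than taken from the paper, and the 0.01% standard deviation is interpreted here as relative to the magnitude of each sensor reading.

```python
import numpy as np

# Five admissible damage states per component (Eq. 2); alpha = 0 is undamaged.
ALPHA_STATES = np.array([0, 1, 2, 3, 4]) * 1e-4

def damaged_stiffness(k0, alpha):
    """Eq. (2): reduce the initial component stiffness k0 by the strength factor alpha."""
    return k0 * (1.0 - alpha)

def noisy_samples(clean, n_samples=100, rel_std=1e-4, seed=0):
    """Replicate one simulated sensor vector with artificial white noise.

    rel_std = 1e-4 corresponds to the 0.01% standard deviation named in the
    text, interpreted here (an assumption) as relative to each reading.
    """
    rng = np.random.default_rng(seed)
    clean = np.asarray(clean, dtype=float)
    noise = rng.normal(0.0, rel_std * np.abs(clean), size=(n_samples,) + clean.shape)
    return clean + noise

# Example: 100 noisy replicas of a hypothetical two-sensor reading.
replicas = noisy_samples([1.2e-3, -4.1e-4])
print(replicas.shape)  # (100, 2)
```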
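The sampling of the model database can be sketched similarly. The following assumes SciPy's qmc module and discretizes continuous Latin Hypercube points onto the five damage states; this discretization is one plausible reading of the procedure, not a detail confirmed by the paper.

```python
import numpy as np
from scipy.stats import qmc

N_PARAMS = 6      # only the six components near the damage are varied
N_SAMPLES = 1200  # design points drawn from the 5**6 possible combinations
N_STATES = 5      # alpha_i in {0, 1, 2, 3, 4} * 1e-4

sampler = qmc.LatinHypercube(d=N_PARAMS, seed=42)
unit_points = sampler.random(n=N_SAMPLES)  # stratified points in [0, 1)^6

# Map each continuous coordinate onto one of the five discrete damage states.
state_index = np.floor(unit_points * N_STATES).astype(int)  # integers 0..4
alphas = state_index * 1e-4                                 # strength factors

# Each row of `alphas` parameterizes one reduced-order simulation; adding 100
# noisy replicas per run yields the 120,000-sample database (1,200 x 100).
print(alphas.shape)  # (1200, 6)
```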
2.3. Optimal classification trees

Classification trees, also known as decision trees, are a staple of machine learning for hierarchical classification based on selected features. Each internal node represents a decision based on a feature, leading to subsequent nodes or to leaf nodes that indicate class labels. In the present context, the sensor readings are the features and the stiffness parameters are the class labels to be predicted. Compared to "black box" models such as neural networks, classification trees offer interpretability, which is crucial for tasks such as bridge maintenance, for example to detect sensor errors. Building a classification tree involves recursively partitioning the feature space to maximize the homogeneity of the resulting nodes, measured by metrics such as the Gini impurity. Training entails selecting the best feature to split on at each node and iterating until a stopping criterion is met, such as reaching a maximum tree depth or achieving no further purity improvement. Techniques such as pruning can prevent overfitting.

In Optimal Classification Trees (OCT), all splits are optimized simultaneously, ensuring global optimality (Bertsimas and Dunn (2017)). This approach achieves high accuracy comparable to that of complex models such as Random Forests while maintaining interpretability. Hyperplane splits, which use linear combinations of features as decision criteria, are also possible. The structure and complexity of an OCT depend on several hyperparameters, including the tree depth, the minimum number of samples per leaf, the split type, and the sparsity of hyperplane splits. Optimal hyperparameter values are determined during training to maximize accuracy while preserving interpretability.
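Training an OCT requires solving a mixed-integer optimization problem, typically with specialized software. As an accessible stand-in, the sketch below fits a greedy Gini-based tree with scikit-learn and searches over the hyperparameters named above (tree depth, minimum number of samples per leaf) on synthetic placeholder data; it illustrates the workflow only, not the globally optimal trees of Bertsimas and Dunn (2017).

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

# Placeholder data standing in for the model database: rows are noisy virtual
# sensor readings, labels are damage-state classes (purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 12))
y = rng.integers(0, 5, size=2000)

# 70% / 30% train/test split as described in Section 2.2.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Greedy CART with Gini impurity; an OCT would instead optimize all splits
# jointly via mixed-integer optimization (Bertsimas and Dunn (2017)).
search = GridSearchCV(
    DecisionTreeClassifier(criterion="gini", random_state=0),
    param_grid={
        "max_depth": [3, 4, 5, 6],          # tree depth
        "min_samples_leaf": [10, 50, 100],  # minimum samples per leaf
    },
    cv=5,
)
search.fit(X_train, y_train)
print("best hyperparameters:", search.best_params_)
print("test accuracy:", search.best_estimator_.score(X_test, y_test))
```

A tree found this way is only locally optimal, one split at a time; the OCT formulation replaces this recursive search with a single optimization over all splits, which is what yields the global-optimality guarantee cited above.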
