
Laura Ierimonti et al. / Procedia Structural Integrity 62 (2024) 832–839


2.4. Bayesian model class selection

Bayesian model class selection (BMCS) is a probabilistic technique employed to discern the best-fitting model from a pool of candidates, relying on input-output data. In the realm of SHM, these competing models often encapsulate distinct damage parametrizations linked to various damage mechanisms. The essence of BMCS lies in harnessing Bayesian statistics (Yuen, 2010), a methodology that integrates prior knowledge to refine the probability of a hypothesis at a specific time. To select the best-fitting model among the candidates, the Bayesian Information Criterion (BIC) can be adopted, computed as:

BIC(ℳ) = −2 · log(L(D | ℳ)) + k · log(N)   (4)

where ℳ denotes the selected mathematical model class; L(D | ℳ) represents the likelihood function, which assesses the agreement between measured and predicted data D at time t; k denotes the number of model parameters; and N stands for the number of data points. The BIC achieves a balance between model fit (likelihood) and complexity, penalizing the inclusion of parameters and accounting for sample size. This provides a principled way to select a model that fits the data effectively while preventing overfitting. In the context of BMCS, the model with the lowest BIC value is selected as the most suitable, striking a compromise between goodness of fit and model simplicity.

2.5. Dynamic Bayesian networks

Dynamic Bayesian networks (DBNs) represent a powerful class of probabilistic graphical models, uniquely suited to tackling a diverse array of problems (Koller and Friedman 2009). Their structured design facilitates the systematic computation of conditional probabilities grounded in available evidence, offering a transparent and intuitive understanding of how objects interact over time. Consequently, DBNs find extensive application in domains requiring automated decision-making.
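The BIC computation of Eq. (4) can be sketched in a few lines of Python. The two candidate model classes, their parameter counts, and their maximized log-likelihood values below are hypothetical, chosen only to illustrate how the penalty term trades goodness of fit against complexity:

```python
import math

def bic(log_likelihood, k, n):
    """BIC(M) = -2 * log L(D | M) + k * log(n), as in Eq. (4)."""
    return -2.0 * log_likelihood + k * math.log(n)

# Hypothetical comparison: two model classes fitted to n = 50 data points.
n = 50
bic_a = bic(-120.0, k=2, n=n)  # model A: 2 parameters, log-likelihood -120.0
bic_b = bic(-117.5, k=5, n=n)  # model B: 5 parameters, log-likelihood -117.5

# The model with the lowest BIC is selected: model B fits slightly better,
# but each extra parameter costs log(n), so the simpler model A wins here.
best = "A" if bic_a < bic_b else "B"
```

Note how the log(n) penalty grows with sample size, so a more complex model must improve the likelihood substantially before it is preferred.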
At its core, a BN serves as a model for capturing the joint probabilities of the events it represents. Constructing a Bayesian network typically involves defining three key components: (i) nodes, which depict random variables; (ii) edges, which establish relationships between variables, also designated as parent and child; and (iii) conditional probabilities, which relate the nodes. The behaviour of each child node, i.e., of the variable the node represents, is primarily influenced by its parents. These connections are pivotal in comprehending how different factors interact and are indispensable for making well-informed decisions based on available evidence. Let 𝒢(t) be a Bayesian network graph over the variables X_1(t), …, X_n(t). The joint distribution P(X_1, …, X_n) can be evaluated according to the chain rule for BNs (Koller and Friedman 2009):

P(X_1, …, X_n; t) = ∏_{i=1}^{n} P(X_i(t) | Pa_{X_i}, t)   (5)

where Pa_{X_i} represents the parents of X_i and P(X_i(t) | Pa_{X_i}) are the conditional probability distributions of X_i, which may vary over time based on evidence. Indeed, results based on a BN can be updated in two primary ways: by adjusting conditional probabilities, reflecting new information or prior knowledge, and by making inferences on node attributes, allowing updates and refinements as new data become available. This can be achieved using various algorithms, either exact or approximate. In the present paper, the variable elimination algorithm is used, which consists of marginalizing out the variables that are not in the subset of variables involved in the evidence process:

P(Q, E) = ∑_{W ∈ X ∖ (Q ∪ E)} P(X)   (6)

where W represents the remaining variables among those in X, and ∑_{W ∈ X ∖ (Q ∪ E)} denotes the summation over all possible values of the variables in X, excluding those in the sets Q (query) and E (evidence). Specifically, the variable elimination process
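As a minimal sketch of the chain rule (Eq. 5) and variable elimination (Eq. 6), the toy network below factorizes the joint distribution of a three-node chain A → B → C and eliminates the intermediate variable B to answer a query. The structure and conditional probability values are invented for illustration and are unrelated to the paper's case study:

```python
# Conditional probability tables for a binary chain A -> B -> C.
p_a = {0: 0.7, 1: 0.3}                                              # P(A)
p_b_given_a = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.2, (1, 1): 0.8}  # P(B|A), key (b, a)
p_c_given_b = {(0, 0): 0.8, (1, 0): 0.2, (0, 1): 0.3, (1, 1): 0.7}  # P(C|B), key (c, b)

def joint(a, b, c):
    # Chain rule for BNs (Eq. 5): P(A, B, C) = P(A) * P(B|A) * P(C|B)
    return p_a[a] * p_b_given_a[(b, a)] * p_c_given_b[(c, b)]

def p_query_and_evidence(c, a_evidence):
    # Eq. (6): sum out the variable B, which is neither query nor evidence.
    return sum(joint(a_evidence, b, c) for b in (0, 1))

# Query P(C | A = 1): marginalize B, then normalize by P(A = 1).
unnorm = {c: p_query_and_evidence(c, a_evidence=1) for c in (0, 1)}
z = sum(unnorm.values())
posterior = {c: v / z for c, v in unnorm.items()}
```

With these illustrative tables the evidence A = 1 shifts the posterior on C toward state 1, showing how inference propagates along the edges of the graph.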
