Federico Ponsi et al. / Procedia Structural Integrity 44 (2023) 1538–1545
$$
e_{f,m} := f_{\mathrm{exp},m} - f_{\mathrm{num},m} \sim N\!\left(0,\ \sigma_{f_m}^{2}\right); \qquad
e_{\varphi,m} := \boldsymbol{\varphi}_{\mathrm{exp},m} - l_m \boldsymbol{\varphi}_{\mathrm{num},m} \sim N\!\left(\mathbf{0},\ \Sigma_{\varphi_m}\right)
\tag{2}
$$

with the scaling factor $l_m$ defined as:

$$
l_m = \frac{\boldsymbol{\varphi}_{\mathrm{exp},m}^{T}\,\boldsymbol{\varphi}_{\mathrm{num},m}}{\left\|\boldsymbol{\varphi}_{\mathrm{num},m}\right\|_{2}^{2}}
\tag{3}
$$
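As an illustration (not the authors' code), the scaling factor of Eq. (3) can be sketched in Python with NumPy; the function name and the 1-D mode-shape representation are assumptions made for this sketch:

```python
import numpy as np

def modal_scaling_factor(phi_exp, phi_num):
    """Scaling factor l_m of Eq. (3): projects the numerical mode shape
    onto the experimental one, removing the arbitrary scaling of
    identified mode shapes before the residual is computed."""
    phi_exp = np.asarray(phi_exp, dtype=float)
    phi_num = np.asarray(phi_num, dtype=float)
    # l_m = (phi_exp^T phi_num) / ||phi_num||_2^2
    return phi_exp @ phi_num / (phi_num @ phi_num)
```

With this definition, $l_m \boldsymbol{\varphi}_{\mathrm{num},m}$ is the least-squares fit of the numerical shape to the experimental one, so a mode shape that differs only by a scale factor yields a zero residual in Eq. (2).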
Under the assumption of statistical independence of the identified modal properties, the likelihood function can be written as the product of the $N_m$ Gaussian distributions with mean $f_{\mathrm{exp},m}$ and standard deviation $\sigma_{f_m}$ and the $N_m$ multivariate Gaussian distributions with mean vector $\boldsymbol{\varphi}_{\mathrm{exp},m}$ and covariance matrix $\Sigma_{\varphi_m}$ ($m = 1, \ldots, N_m$). The covariance matrix is usually assumed to be diagonal, meaning that no correlation is considered between different mode shape components. The variance of the frequency prediction error is expressed as $\sigma_{f_m}^2 = \varepsilon_f^2 f_{\mathrm{exp},m}^2$, while the covariance matrix of the mode shape prediction error is expressed as $\Sigma_{\varphi_m} = \varepsilon_\varphi^2 \|\boldsymbol{\varphi}_{\mathrm{exp},m}\|_2^2 \mathbf{I}$, where $\mathbf{I}$ is the identity matrix. According to the previous assumptions, the likelihood function can be defined as:
$$
p(\mathbf{d} \mid \mathbf{x}, M) = q_1 \exp\!\left(-\frac{1}{2} J(\mathbf{x})\right)
\tag{4}
$$
where $q_1$, which is a function of the coefficients of variation $\varepsilon_f$ and $\varepsilon_\varphi$, is a normalizing factor, and $J(\mathbf{x})$ is a discrepancy function defined as:
$$
J(\mathbf{x}) = \sum_{m=1}^{N_m} \frac{\left(f_{\mathrm{exp},m} - f_{\mathrm{num},m}(\mathbf{x})\right)^{2}}{\varepsilon_f^{2}\, f_{\mathrm{exp},m}^{2}} + \sum_{m=1}^{N_m} \frac{\left\|\boldsymbol{\varphi}_{\mathrm{exp},m} - l_m \boldsymbol{\varphi}_{\mathrm{num},m}(\mathbf{x})\right\|_{2}^{2}}{\varepsilon_\varphi^{2}\, \left\|\boldsymbol{\varphi}_{\mathrm{exp},m}\right\|_{2}^{2}}
\tag{5}
$$
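A minimal NumPy sketch of the discrepancy function of Eq. (5), combining the frequency and mode-shape terms with the scaling factor of Eq. (3); the function signature and array layout (one row per mode) are assumptions of this sketch, not the authors' implementation:

```python
import numpy as np

def discrepancy(f_exp, f_num, phi_exp, phi_num, eps_f, eps_phi):
    """Discrepancy J(x) of Eq. (5).

    f_exp, f_num  : (Nm,) experimental / numerical natural frequencies
    phi_exp, phi_num : (Nm, n_dof) experimental / numerical mode shapes
    eps_f, eps_phi : coefficients of variation of the prediction errors
    """
    f_exp = np.asarray(f_exp, dtype=float)
    f_num = np.asarray(f_num, dtype=float)
    phi_exp = np.asarray(phi_exp, dtype=float)
    phi_num = np.asarray(phi_num, dtype=float)
    # Scaling factors l_m of Eq. (3), one per mode
    l = (np.einsum('md,md->m', phi_exp, phi_num)
         / np.einsum('md,md->m', phi_num, phi_num))
    # Frequency term: squared relative errors weighted by eps_f^2
    J_f = np.sum((f_exp - f_num) ** 2 / (eps_f ** 2 * f_exp ** 2))
    # Mode-shape term: squared norm of the scaled residual,
    # normalized by the squared norm of the experimental shape
    resid = phi_exp - l[:, None] * phi_num
    J_phi = np.sum(np.sum(resid ** 2, axis=1)
                   / (eps_phi ** 2 * np.sum(phi_exp ** 2, axis=1)))
    return J_f + J_phi
```

The likelihood of Eq. (4) is then proportional to `np.exp(-0.5 * discrepancy(...))`; a perfect match of frequencies and (arbitrarily scaled) mode shapes gives $J(\mathbf{x}) = 0$.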
2.1. Bayesian selection of the optimal coefficients of variation

When constructing the likelihood function, incorrect assumptions about the characteristics of the prediction error, namely the values of the coefficients of variation $\varepsilon_f$ and $\varepsilon_\varphi$, may unduly influence the Bayesian updating results. In this regard, the Bayesian inference framework makes it possible to use the available data and to include the error parameters in the updating process in order to identify the characteristics of the prediction error (Simoen et al. 2015). Bayesian model class selection (BMCS) is an additional level of model updating that focuses on selecting the most plausible model class from a set of alternatives according to the measured data $\mathbf{d}$. In our case, a model class is defined by a specific value of the coefficients of variation $\varepsilon_f$ and $\varepsilon_\varphi$. Considering a discrete set of model classes $\mathbf{M} = \{M_k : k = 1, 2, \ldots, N_{MC}\}$, Bayes' theorem expressed at the model class level updates the prior probability $P(M_k \mid \mathbf{M})$ into the posterior $P(M_k \mid \mathbf{d}, \mathbf{M})$ through the information contained in $\mathbf{d}$:
$$
P(M_k \mid \mathbf{d}, \mathbf{M}) = \frac{p(\mathbf{d} \mid M_k)\, P(M_k \mid \mathbf{M})}{p(\mathbf{d} \mid \mathbf{M})}
\tag{6}
$$
If all the model classes are considered equally plausible a priori, the posterior probability depends exclusively on the factor $p(\mathbf{d} \mid M_k)$, which is the Bayesian evidence for the model class $M_k$, previously introduced as $c$ in Section 2 for a generic model class $M$. The denominator of Eq. (6) is a constant ensuring that the posterior probabilities of all model classes sum to 1.
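Under uniform priors, normalizing the evidences into posterior model-class probabilities per Eq. (6) can be sketched as follows; working with log-evidences is an implementation choice of this sketch (evidences can underflow in double precision), not something prescribed by the text:

```python
import numpy as np

def model_class_posteriors(log_evidences):
    """Posterior probabilities P(M_k | d, M) of Eq. (6) with uniform
    priors P(M_k | M) = 1/N_MC: each posterior is proportional to the
    evidence p(d | M_k), and the denominator p(d | M) normalizes the
    set so that the posteriors sum to 1."""
    log_ev = np.asarray(log_evidences, dtype=float)
    # Shift by the maximum before exponentiating to avoid under/overflow
    w = np.exp(log_ev - log_ev.max())
    return w / w.sum()
```

The most plausible pair $(\varepsilon_f, \varepsilon_\varphi)$ is then the model class with the largest posterior, e.g. `np.argmax(model_class_posteriors(log_ev))`.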