
Figure 1: Illustration of the AdaBoost algorithm (the dataset is reweighted at each iteration to train weak classifiers #1, #2, and #3, which are combined into a strong classifier).

Assume that the training dataset contains $N$ samples $\{(x_i, y_i) \mid i = 1,\dots,N\}$, where $x_i$ is a vector of input features and $y_i \in \{+1, -1\}$ is the output label. First, the same weight is assigned to all samples, $D_1 = \{w_{1,i} = 1/N \mid i = 1,\dots,N\}$. A weak classifier $C_1$ is then trained on the weighted dataset. Second, the error rate of the trained classifier $C_1$ is determined using the following formula:

$$ err_1 = \sum_{i=1}^{N} w_{1,i} \, I\big(C_1(x_i) \neq y_i\big) \qquad (7) $$

where $I(\cdot)$ is a function that returns 0 if the prediction is correct and 1 if it is wrong. Third, the weight $\alpha_1$ of the weak classifier $C_1$ is computed from its error rate:

$$ \alpha_1 = \frac{1}{2} \ln\!\left(\frac{1 - err_1}{err_1}\right) \qquad (8) $$
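As a minimal numerical sketch of Eqns. (7) and (8), the error rate and the classifier weight can be computed from the sample weights and the predictions of a trained weak classifier. The arrays below (`w`, `y_true`, `y_pred`) are hypothetical placeholders, not data from the article:

```python
import numpy as np

# Hypothetical example with N = 5 samples (illustration only)
w = np.full(5, 1 / 5)                      # D_1: uniform weights w_{1,i} = 1/N
y_true = np.array([+1, +1, -1, -1, +1])    # output labels y_i
y_pred = np.array([+1, -1, -1, -1, +1])    # predictions of weak classifier C_1

# Eqn. (7): weighted error rate; (y_pred != y_true) acts as the indicator I(.)
err = np.sum(w * (y_pred != y_true))       # -> 0.2

# Eqn. (8): weight of the weak classifier
alpha = 0.5 * np.log((1 - err) / err)      # -> 0.5 * ln(4), about 0.693
```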

Next, the set of weights is adjusted to $D_2 = \{w_{2,i} \mid i = 1,\dots,N\}$ for the second iteration, in which the weight $w_{2,i}$ is calculated as follows:

$$ w_{2,i} = \frac{w_{1,i} \exp\big(-\alpha_1 y_i C_1(x_i)\big)}{\sum_{i=1}^{N} w_{1,i} \exp\big(-\alpha_1 y_i C_1(x_i)\big)} \qquad (9) $$
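Carrying the variables over from the previous sketch, Eqn. (9) increases the weights of misclassified samples and decreases the rest, so that the next weak classifier focuses on the hard samples:

```python
# Eqn. (9): reweight the samples and renormalize so the weights sum to 1
w_unnorm = w * np.exp(-alpha * y_true * y_pred)
w = w_unnorm / np.sum(w_unnorm)
# the misclassified sample now carries weight 0.5; the others 0.125 each
```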

The weak classifier $C_2$ is then trained on the new weighted dataset. After $T$ iterations, a total of $T$ weak classifiers have been sequentially trained. For a new sample $x_{new}$, the label $y_{new}$ is estimated by taking the sign of the weighted sum of the predictions of these weak classifiers:

$$ y_{new} = \operatorname{sign}\!\left(\sum_{t=1}^{T} \alpha_t \, C_t(x_{new})\right) \qquad (10) $$

where $\operatorname{sign}(\cdot)$ is a function that extracts the sign of a real number. The weak classifier used in AdaBoost can be any machine learning classification algorithm, but the Decision Tree is commonly used. The background theory of the Decision Tree is not presented here since it is outside the scope of this article.
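For completeness, Eqns. (7)-(10) can be condensed into a short training loop. The sketch below uses depth-1 decision trees (stumps) from scikit-learn as weak classifiers, passing the sample weights through their `sample_weight` argument; the function names and the choice of `T` are illustrative assumptions, and a production implementation would also guard against an error rate of exactly zero in Eqn. (8):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, T=10):
    """Train T weak classifiers (decision stumps) following Eqns. (7)-(9)."""
    N = len(y)
    w = np.full(N, 1 / N)                        # D_1: uniform initial weights
    classifiers, alphas = [], []
    for _ in range(T):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)         # train on the weighted dataset
        pred = stump.predict(X)
        err = np.sum(w * (pred != y))            # Eqn. (7)
        alpha = 0.5 * np.log((1 - err) / err)    # Eqn. (8)
        w = w * np.exp(-alpha * y * pred)        # Eqn. (9), numerator
        w /= np.sum(w)                           # Eqn. (9), normalization
        classifiers.append(stump)
        alphas.append(alpha)
    return classifiers, alphas

def adaboost_predict(classifiers, alphas, X_new):
    """Eqn. (10): sign of the weighted sum of weak-classifier predictions."""
    score = sum(a * c.predict(X_new) for c, a in zip(classifiers, alphas))
    return np.sign(score)
```

Here the labels in `y` are assumed to be encoded as +1 and -1, consistent with the formulation above.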
