
Sahil Sehrawat et al. / Procedia Structural Integrity 70 (2025) 394–400


Fig. 2. Flowchart of the research methodology.

2.2.1. ML Technique Based on Decision Trees
Decision Tree (DT) learning is a supervised method that supports both classification and regression. Simple decision rules are derived from the training data to build a model that predicts the class or value of a target variable. A decision tree classifies a record starting from the root: the record's attribute value is compared with the root node's condition, and the outcome of the comparison sends the record down the corresponding branch to the next node, repeating until a leaf is reached. Fig. 3 shows DT modelling.
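As a rough from-scratch sketch of this idea (a one-level tree, or "decision stump", on a hypothetical toy dataset; not the paper's implementation), deriving a decision rule from training data and routing a record down a branch can look like:

```python
# Minimal decision-stump sketch: derive one split rule (feature, threshold)
# from the data, then route each record down the matching branch.
# The dataset and names are illustrative, not from the paper.

def train_stump(X, y):
    """Pick the (feature, threshold) split with the fewest training errors."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [lab for row, lab in zip(X, y) if row[f] <= t]
            right = [lab for row, lab in zip(X, y) if row[f] > t]
            if not left or not right:
                continue  # a split must send records to both branches
            lpred = max(set(left), key=left.count)    # majority label, left
            rpred = max(set(right), key=right.count)  # majority label, right
            errors = sum(l != lpred for l in left) + sum(r != rpred for r in right)
            if best is None or errors < best[0]:
                best = (errors, f, t, lpred, rpred)
    _, f, t, lpred, rpred = best
    return (f, t, lpred, rpred)

def predict_stump(model, row):
    """Compare the record's attribute with the root rule, follow the branch."""
    f, t, lpred, rpred = model
    return lpred if row[f] <= t else rpred

# Toy data: one feature, label flips above 2.
X, y = [[1], [2], [3], [4]], [0, 0, 1, 1]
model = train_stump(X, y)  # learns the rule "feature 0 <= 2 -> class 0"
```

A full decision tree recurses on each branch until a stopping criterion is met; the stump stops after a single split, but the root comparison it performs is the same.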

[Fig. 3 schematic: a data set is the input to the decision tree algorithm, whose output is a decision tree model.]

Fig. 3. Schematic representation of a decision tree.

2.2.2. The Random Forest (RF) Technique
RF is a supervised learning method. It uses the "bagging" technique to build a "forest" of decision trees; bagging rests on the idea that combining several learning models improves results. By aggregating many DTs, RF produces more accurate and stable predictions. RF is advantageous for both regression and classification, which account for the majority of machine-learning workloads today. Its hyperparameters are similar to those of a DT or a bagging classifier, yet an RF classifier can be used directly, without separately configuring DTs or a bagging classifier. Regression problems are handled by the RF regression variant of the algorithm.
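The bagging idea described above can be sketched in a few lines (an illustrative toy, not the paper's code): each tree is trained on a bootstrap sample of the data, and the forest's prediction is a majority vote. For brevity each "tree" here is a one-level stump; a real RF grows full trees and also subsamples features at each split.

```python
import random

def train_stump(X, y):
    """One-level tree: best (feature, threshold) split by training error."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [lab for row, lab in zip(X, y) if row[f] <= t]
            right = [lab for row, lab in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            lp = max(set(left), key=left.count)
            rp = max(set(right), key=right.count)
            errs = sum(l != lp for l in left) + sum(r != rp for r in right)
            if best is None or errs < best[0]:
                best = (errs, f, t, lp, rp)
    if best is None:  # degenerate bootstrap sample: predict the majority class
        m = max(set(y), key=y.count)
        return (0, float("inf"), m, m)
    return best[1:]

def predict_stump(model, row):
    f, t, lp, rp = model
    return lp if row[f] <= t else rp

def train_forest(X, y, n_trees=25, seed=0):
    """Bagging: fit each tree on a bootstrap sample (drawn with replacement)."""
    rng = random.Random(seed)
    n = len(X)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]
        forest.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def predict_forest(forest, row):
    votes = [predict_stump(m, row) for m in forest]
    return max(set(votes), key=votes.count)  # majority vote across trees

# Toy data: one feature, label flips above 3.
X = [[1], [2], [3], [4], [5], [6]]
y = [0, 0, 0, 1, 1, 1]
forest = train_forest(X, y)
```

Because each tree sees a slightly different resample of the data, individual trees err on different records; the majority vote averages those errors out, which is the stability gain the text attributes to RF.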
