the specific non-repudiable operation that caused the discrepancy can readily be identified, and responsibility assigned. In line with the principles of trustworthy AI, and more specifically with the requirement on “Human agency and oversight” (High-Level Expert Group on AI (2019)), the Structural Defects Recognition service does not intend to replace the role of human surveyors, but rather assists them in their work by allowing them to use the generated inferences as an additional tool. Furthermore, using the inferences produced by the Structural Defects Recognition service does not increase the surveyor's liability: thanks to the data lineage, the surveyor can irrefutably demonstrate the origin of any inference and will not be held personally accountable if an inference provides an imprecise classification of defects. Instead, through external auditing, it will be possible to trace any responsibilities, or otherwise to prove the good faith and correct execution of processes by all involved organizations.

We believe this application to be of particular interest because, by making the monitoring processes of public infrastructures reproducible and auditable, all involved parties would benefit:

● Infrastructure manager: would be able to increase the level of internal auditing, enhance the consistency and homogeneity of the evaluations made by its personnel performing the surveys, and would have a technological tool to enforce the rules and constraints for the tracked processes, thereby reducing the risks associated with maintenance activities.
● Regulators and third parties: can operate a blockchain node to continuously monitor the tracked processes. When sending inspectors to an infrastructure under maintenance, the availability of the photographic and documentary material collected during all previous surveys by the infrastructure manager is guaranteed and, most importantly, its provenance can be verified.
● Infrastructure end users: it becomes practical to make a selection of verifiable metrics related to maintenance activities (e.g., frequency of checks, measures of the overall state of the infrastructure) accessible to the end users of the infrastructure.

3.2. Risk Classification of Infrastructure by Explainable AI

Within the context of trustworthy AI, explainability plays a central role. In the “Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment” by the High-Level Expert Group on AI (2020) set up by the European Commission, AI explainability is defined as “the ability to explain both the technical processes of the AI system and the reasoning behind the decisions or predictions that the AI system makes”. Explainability is crucial for trustworthy AI systems, but it has limitations that are also recognized by the High-Level Expert Group on AI (2019): for instance, several kinds of widely used AI approaches (e.g., neural networks) generate outputs in a way that cannot be made intelligible in terms of human reasoning, and are therefore called “black boxes”. For black box systems, particularly when used in scenarios where incorrect or inaccurate outputs could have serious consequences for human safety, it is required to compensate for their opacity by enhancing traceability and auditability (High-Level Expert Group on AI (2020)). The sector of critical infrastructure maintenance is clearly one of the scenarios where explainability is particularly critical.
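To make the notion of compensating opacity through traceability and auditability more concrete, the following minimal sketch shows how each operation in the data lineage of an inference could be recorded as a hash-chained, attributable entry. It is an illustration under our own assumptions, not the implementation of the framework described here; names such as AuditRecord, append_record and verify_chain are hypothetical.

```python
# Minimal sketch (not the framework's actual data model) of a hash-chained audit
# trail for the operations that make up the data lineage of an inference.
import hashlib
import json
import time
from dataclasses import dataclass, asdict
from typing import List


@dataclass
class AuditRecord:
    """One non-repudiable operation in the lineage of an inference."""
    operation: str     # e.g. "image_upload", "model_inference", "survey_review" (illustrative)
    actor: str         # identity of the organization or surveyor performing the operation
    payload_hash: str  # hash of the artefact involved (image, inference result, report)
    timestamp: float
    prev_hash: str     # hash of the previous record, forming a tamper-evident chain

    def record_hash(self) -> str:
        serialized = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(serialized).hexdigest()


def append_record(chain: List[AuditRecord], operation: str, actor: str, payload: bytes) -> AuditRecord:
    """Append an operation to the lineage; later alterations break the hash chain."""
    prev_hash = chain[-1].record_hash() if chain else "0" * 64
    record = AuditRecord(operation, actor, hashlib.sha256(payload).hexdigest(),
                         time.time(), prev_hash)
    chain.append(record)
    return record


def verify_chain(chain: List[AuditRecord]) -> bool:
    """An auditor can recompute the chain and locate exactly which operation was altered."""
    return all(curr.prev_hash == prev.record_hash()
               for prev, curr in zip(chain, chain[1:]))
```

In such a scheme, a discrepancy found during an audit can be traced back to the specific operation and actor that produced it, which is the property exploited above to relieve the surveyor of personal liability for imprecise inferences.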
The trustworthy AI framework used in this paper is beneficial for enhancing the explainability of any AI system, whether a black box or not, as it introduces mechanisms for traceability and auditability. In Natali et al. (2023), an explainable AI system for the assessment of the level of safety and the monitoring of existing bridges was presented. In that system, explainability was achieved through an eXplainable Artificial Intelligence (XAI) algorithm, an expert system that overcomes the usual pitfalls of conventional black box approaches. For this use case, we propose an improvement over the approach presented in Natali et al. (2023) by integrating the expert system within the trustworthy AI framework, thus, as previously discussed and in accordance with the European “Ethics guidelines for trustworthy AI”, further improving its level of explainability. The solution involves the use of the blockchain to track the process of creation and maintenance of the XAI system. A ledger of the blockchain stores all the versions of the expert system, the description of the reasons that necessitated each new version (for example, an update to the “Guidelines for classification, risk management, assessment of level of safety and monitoring of the existing bridges”), and the AI system certification. The certification consists of a compliance statement recorded in the ledger by an internal certifier who verifies the consistency with the guidelines. Using the blockchain, the execution of every operation of the development process
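As a purely illustrative sketch, the kind of record that such a ledger could hold for each version of the expert system might look as follows; the field names (rules_hash, change_reason, certification, and so on) are our assumptions, not the actual data model of the system.

```python
# Illustrative sketch of a ledger record for one version of the XAI expert system:
# the version, the reason that motivated it, and the certifier's compliance statement.
# Field names are assumptions, not the paper's data model.
import hashlib
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Certification:
    certifier_id: str  # internal certifier who verified consistency with the guidelines
    statement: str     # compliance statement, e.g. "consistent with the Guidelines, rev. <n>"
    signature: str     # digital signature binding the certifier to the statement


@dataclass(frozen=True)
class ExpertSystemVersion:
    version: str                            # e.g. "2.1.0"
    rules_hash: str                         # hash of the expert-system rule base
    change_reason: str                      # e.g. "update to the Guidelines"
    certification: Optional[Certification]  # None until the internal certifier signs off


def ledger_key(record: ExpertSystemVersion) -> str:
    """Deterministic identifier under which the version record could be stored in the ledger."""
    payload = f"{record.version}:{record.rules_hash}:{record.change_reason}"
    return hashlib.sha256(payload.encode()).hexdigest()
```

Storing each version together with its motivation and its certification in this way is what would allow an external auditor to reconstruct why, when, and by whom the expert system was changed over time.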
