(datasets) of prepared examples, which include specific instances (data points) together with human-assigned or historically recorded values (labels or judgements) that indicate what the ML system should predict when it is next presented with similar cases. The learning can also be based on unlabeled datasets (thereby substantially reducing the human effort needed to prepare the training dataset), provided that some form of reward function, fitness function, or error measure is available. In such cases, the ML system initially takes essentially random decisions, but improves over time based on the feedback it receives on each decision. In other schemes, the feedback is not encoded by a well-known function defined a priori, but is generated by a different ML system (so-called adversarial learning) or by real-world observations.

For AI systems built on symbolic, rule-based programming, access to expertise is essential. In such systems, often called expert systems (ES), expert knowledge (hence the name) is encoded into the system's rules, in what is essentially a special form of programming. For instance, in an expert system for cost estimation, the designers would need to elicit general rules from experts in the field and encode them for later application to specific cases. These rules would then be applied to the data for a given instance, producing the corresponding output or judgement. The exact form of encoding, and the mechanism by which rules are applied so that the intended result is obtained, give rise to a variety of techniques; for our purposes, however, such differences are not relevant, and we will not discuss specific techniques here.

Another popular approach consists of learning symbolic rules from data instead of explicitly programming them. This can be achieved by using statistical properties of the dataset to generate, for example, decision trees. Such approaches have the advantage that decisions taken in future cases are, at least to some extent, explainable by showing the computation the system performed to reach its conclusion.

AI systems can be applied to several different classes of problems: classification (assigning instances to one of a set of pre-defined classes), anomaly detection (identifying “odd” instances in a large population), clustering (grouping similar instances together), translation (mapping data, sequences, or structures in one domain to corresponding elements in another domain), generation, etc. The latter (often termed Generative AI, or GAI) has attracted much attention lately; it is based on the idea that, given an initial prompt, a trained AI system can generate an entire artifact. The prompt can be in the same domain as the output (e.g., from a sketch to a full picture, or from a textual request to a textual answer) or in a different one (e.g., from a textual description to a full picture matching the description).

In all cases, an AI system is manifested in a software artifact that we call the model of the system; for an expert system this could be the set of rules, whereas for a neural network it would be an encoding of the neural architecture together with a set of matrices of numerical weights associated with the components of the architecture. Regardless of the specific approach, in order to build trustworthy systems, all these different artifacts (models, datasets, classes, prompts, instances, etc.) must themselves be trustworthy.
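To make the reward-driven scheme described above concrete, the following minimal Python sketch uses an invented two-option toy problem; the reward function here merely stands in for real feedback. The system starts from random decisions and, over time, comes to prefer the rewarded option:

    # Toy sketch of learning from a reward signal (no labeled data).
    import random
    q = [0.0, 0.0]                                   # estimated value of two possible decisions
    def reward(a): return 1.0 if a == 1 else 0.0     # stand-in for real-world feedback
    for step in range(200):
        # mostly exploit the best current estimate, occasionally explore at random
        a = random.randrange(2) if random.random() < 0.1 else q.index(max(q))
        q[a] += 0.1 * (reward(a) - q[a])             # improve the estimate from feedback
    print(q)                                         # decision 1 typically ends up preferred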
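The rule-based approach can be illustrated with a similarly minimal sketch of the cost-estimation example mentioned above; the rules and case attributes below are invented for illustration and do not reflect any particular expert-system shell:

    # Hypothetical sketch: expert knowledge encoded as condition/action rules.
    # Each rule tests a property of the case and contributes to the estimate.
    rules = [
        (lambda c: c["material"] == "steel",     lambda c: 50.0 * c["mass_kg"]),
        (lambda c: c["material"] == "composite", lambda c: 120.0 * c["mass_kg"]),
        (lambda c: c["welded_joints"] > 10,      lambda c: 300.0),  # inspection surcharge
    ]

    def estimate_cost(case):
        """Apply every rule whose condition matches and sum the contributions."""
        return sum(action(case) for condition, action in rules if condition(case))

    case = {"material": "steel", "mass_kg": 12.0, "welded_joints": 14}
    print(estimate_cost(case))   # 50*12 + 300 = 900.0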
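Learning symbolic rules from a labeled dataset can likewise be sketched, here with the widely used scikit-learn library; the data points and labels are invented, and export_text prints the learned tree so that the computation behind each judgement can be inspected, illustrating the explainability property noted above:

    # Sketch: learning a decision tree from a labeled dataset (toy data, invented here).
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Each row is one data point; the labels are human-assigned judgements.
    X = [[25, 0], [30, 1], [70, 1], [80, 0], [65, 1], [20, 0]]   # [component_age, under_load]
    y = ["ok", "ok", "replace", "replace", "replace", "ok"]

    model = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(model.predict([[75, 1]]))                  # judgement for a new instance
    # The learned rules can be shown explicitly, making the decision traceable:
    print(export_text(model, feature_names=["component_age", "under_load"]))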
To a certain extent, trust can be placed in the author(s) of the artifacts, but to establish stronger assurances one must be able to retrieve and validate – in addition to the model – all the artifacts that have contributed to the training of an ML system or to the programming of an ES, as well as the data about the specific instance that is submitted to the AI system, together with the supposed corresponding output.

2.2. Trustworthy AI Framework

In Canciani et al. (2023), a framework for the development of Trustworthy AI was presented. The framework manages the fundamental phases in the life cycle of an AI system (data preparation, model training, and inference generation), enabling a shift from the traditional accuracy-based paradigm to an approach in which trustworthiness becomes an integral part of the design of AI systems. The framework provides a technological solution to address five key requirements detailed in the “Ethics Guidelines for Trustworthy Artificial Intelligence” by the High-Level Expert Group on AI (2019):

● Human agency and oversight: AI systems should support users in making informed decisions aligned with their goals;
● Technical robustness and safety: AI systems should be accurate, resilient to attacks, and their results should be reproducible;
● Privacy and data governance: AI systems must ensure privacy and data protection throughout their entire life cycle;
● Transparency: the datasets and processes involved in an AI system's decision-making should be transparent and traceable (a minimal sketch follows this list);
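As a purely hypothetical sketch of how such traceability might be supported in practice (this is not the mechanism of the framework by Canciani et al. (2023)), one could record a cryptographic digest of every artifact involved in a training run, so that the exact dataset and model can later be retrieved and validated:

    # Hypothetical sketch: record digests of the artifacts of a training run
    # so that dataset, model, and outputs can later be retrieved and validated.
    import hashlib, json

    def digest(path):
        """SHA-256 digest of a file, read in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # File names are placeholders for the artifacts of a real pipeline.
    artifacts = ["train_set.csv", "model_weights.bin"]
    record = {name: digest(name) for name in artifacts}
    with open("provenance.json", "w") as f:
        json.dump(record, f, indent=2)   # auditable provenance record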