
Shirley J. Dyke et al. / Procedia Structural Integrity 64 (2024) 21–28


many cases. Thus, this work begins by introducing an AI method to automate the classification of bridge substructures from images, with the aim of collecting substructure information and thereby streamlining seismic vulnerability assessments across large bridge inventories. By automating the time-consuming task of manually gathering these essential data, the approach enables quicker, more efficient assessments. The method was validated using a dataset generated from Indiana's bridge asset management system and was found to accurately identify substructure types. Some sample images are shown in the second row of images in Fig. 1. This application serves as one example of using AI to support infrastructure management in seismic-prone areas. Another example of applying AI toward bridge safety concerns inspection and condition rating. Zhang et al. introduce an AI-based method for assessing crack conditions in concrete bridge decks, aiming to meet Federal Highway Administration (FHWA) inspection requirements (2023c). The focus of this work is on condition state assessment of concrete decks with respect to cracking. Utilizing deep learning, the method automates the classification and segmentation of cracks in reinforced concrete bridge decks, facilitating the determination of the condition state per FHWA guidelines based on a set of images collected from a single bridge deck. Image classification and semantic segmentation are used in sequence to process the set of inspection images, enabling cracks to be identified and classified with good accuracy. This process not only significantly reduces the manual effort required to perform bridge inspections, but also enhances the consistency of condition state assessments, offering a promising approach to aid human inspectors in assessments and maintenance in an era of rapidly aging infrastructure.
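The classify-then-segment pipeline ultimately reduces to aggregating per-image crack measurements into a single condition state for the deck. The following is a minimal sketch of that final aggregation step only, not the authors' implementation: the `crack_density` and `condition_state` functions and all thresholds are hypothetical placeholders, and actual FHWA condition states depend on more than a single density figure.

```python
# Hypothetical sketch: aggregating binary crack-segmentation masks for one
# deck into a 1-4 condition state. Thresholds are illustrative only.

def crack_density(mask):
    """Fraction of pixels labeled as crack in a binary mask (list of rows)."""
    total = sum(len(row) for row in mask)
    cracked = sum(sum(row) for row in mask)
    return cracked / total if total else 0.0

def condition_state(masks, thresholds=(0.005, 0.02, 0.08)):
    """Map the mean crack density over a deck's image set to a state 1-4.
    The threshold values are assumptions, not FHWA-specified numbers."""
    density = sum(crack_density(m) for m in masks) / len(masks)
    if density <= thresholds[0]:
        return 1
    if density <= thresholds[1]:
        return 2
    if density <= thresholds[2]:
        return 3
    return 4

# e.g., an uncracked deck image set yields state 1
state = condition_state([[[0, 0], [0, 0]]])
```

In practice the masks would come from a trained segmentation network applied to the classified crack images; only the bookkeeping after that point is shown here.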
3. Understanding Image Context for Structural Engineering

While Section 2 focuses on the development of sophisticated tools to automatically establish the content of individual images within large, cluttered sets of real-world data, this is just one piece of the puzzle. To truly understand and use the data collected, context is a critical element. Engineers, for example, require structural drawings to comprehend the spatial arrangement of the structural elements of a building or bridge. The consequences of damage identified using the techniques discussed in Section 2 for the overall performance of a given structure depend strongly on where that damage is located. Thus, knowing where each photo was taken within a building adds another layer of valuable information. Furthermore, merging datasets collected by different teams, perhaps for different research purposes, into a unified dataset for a given structure is also quite important for obtaining comprehensive information about that building. Such contextual details are essential for using the information in the images to assess performance and apply engineering knowledge to make decisions. Liu et al. introduce a pioneering approach designed to localize images collected within buildings (2020). This method is especially useful for post-event assessments following natural disasters, as it requires no costly additional equipment and only minimally modifies the procedure used by reconnaissance teams. The method capitalizes on visual odometry to reconstruct 3D point cloud models from sequences of images captured inside buildings. Initially developed for a single floor of a building, the method was later extended to apply to several multi-story buildings (Liu et al., 2023).
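Registering such an image-based reconstruction against a building's drawings implies estimating the transform between the odometry coordinate frame and the drawing's coordinate frame. The sketch below uses the standard Umeyama least-squares alignment to recover a 2D similarity transform from a few matched reference points; it is a generic illustration of this registration step under that assumption, not the specific procedure of Liu et al.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares 2D similarity transform (Umeyama): returns scale s,
    rotation R (2x2), and translation t such that dst ~= s * R @ src + t.
    `src`/`dst` are matched point lists, e.g. camera positions recovered by
    visual odometry vs. the same locations marked on the floor plan."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d            # centered point sets
    U, S, Vt = np.linalg.svd(B.T @ A)        # SVD of the cross-covariance
    d = np.sign(np.linalg.det(U @ Vt))       # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Toy example: three odometry positions and their floor-plan counterparts
# (related here by scale 2, a 90-degree rotation, and translation (3, 4)).
s, R, t = similarity_transform([[0, 0], [1, 0], [0, 1]],
                               [[3, 4], [3, 6], [1, 4]])
```

Once estimated from a handful of correspondences, the same transform maps every localized image position onto the drawing, which is what enables the overlay with minimal manual intervention.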
Such an approach facilitates the automatic mapping of images onto structural drawings as well as the creation of detailed 3D textured models of specific building components (Fig. 2). This methodology is meant to provide the spatial context needed for thorough post-disaster assessments in GPS-denied environments. The authors also automate image overlay on structural drawings. Overall, minimal manual intervention is needed to map the images to the drawings, and there is no need for costly 3D/LIDAR sensors.

Structural drawings are often collected in the field as high-quality photos of partial views, later used to understand the reasons behind a building's performance. Yeum et al. adopted computer vision methods to reconstruct complete, high-resolution structural drawings from many partial drawings, enabling reconnaissance teams to use the drawings more efficiently without performing this tedious work manually (2019). This method is meant to complement the other methods described in Section 2 and support reconnaissance missions and subsequent research using those data.

In reconnaissance missions, it is possible that two or more teams will visit the same structure, possibly intending to collect information and images to answer different research questions. There is value in combining those datasets into one to get a comprehensive view of the state of the building. Choi et al. developed a way to use Siamese convolutional neural networks (S-CNN) to perform a similarity search for such cases (2022). The S-CNN uses single-shot training and is especially fast at sifting and analyzing large-scale post-disaster image datasets. The method automatically ranks and retrieves images related to a specific building in response to a query image, and opens the door to other opportunities to use similarity search to relieve the human engineer of tedious tasks.
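At retrieval time, a Siamese network reduces each image to an embedding vector, and "similarity search" becomes nearest-neighbor ranking in that embedding space. The following is a minimal sketch of the ranking step only, assuming cosine similarity; the embeddings shown are placeholders that a trained S-CNN would supply, and the function name is hypothetical.

```python
import numpy as np

def rank_by_similarity(query_vec, gallery):
    """Rank gallery image ids by cosine similarity to the query embedding.
    `gallery` maps image id -> embedding vector (placeholders here; in the
    real pipeline these come from a trained Siamese network)."""
    q = np.asarray(query_vec, float)
    q = q / np.linalg.norm(q)
    scores = {}
    for name, vec in gallery.items():
        v = np.asarray(vec, float)
        scores[name] = float(q @ (v / np.linalg.norm(v)))
    # Highest similarity first: these are the images most likely to show
    # the same building as the query image.
    return sorted(scores, key=scores.get, reverse=True)

ranking = rank_by_similarity([1, 0], {"img_a": [1, 0],
                                      "img_b": [0.9, 0.1],
                                      "img_c": [0, 1]})
```

Because the embeddings are computed once per image, merging two teams' datasets reduces to issuing each team's building photos as queries against the other team's gallery.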
