Structural health monitoring (SHM) has advanced significantly in recent decades, accumulating massive volumes of monitoring data. Anomalies inevitably exist in these data, posing significant challenges to their effective utilization. Recently, deep learning has emerged as an efficient and effective approach for anomaly detection in bridge SHM. Despite this progress, many deep learning models require large amounts of labeled data for training. Labeling data, however, is labor-intensive, time-consuming, and often impractical for large-scale SHM datasets. To address these challenges, this work explores the use of self-supervised learning (SSL), an emerging paradigm that combines unsupervised pre-training with supervised fine-tuning. The SSL-based framework aims to learn from only a very small quantity of labeled data through fine-tuning, while exploiting the vast amount of unlabeled SHM data through pre-training. Mainstream SSL methods are compared and validated on the SHM data of two in-service bridges. Comparative analysis demonstrates that SSL techniques boost data anomaly detection performance, achieving higher F1 scores than conventional supervised training, especially when labeled data are very limited. This work demonstrates the effectiveness and superiority of SSL techniques on large-scale SHM data, providing an efficient tool for preliminary anomaly detection with scarce label information.
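The two-stage paradigm described above (unsupervised pre-training on the large unlabeled pool, then supervised fine-tuning on a handful of labels) can be sketched with a toy linear autoencoder and a logistic head. Everything below is an assumption for illustration: the synthetic data, the dimensions, the learning rates, and the linear encoder are stand-ins, not the models or SHM data used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for windowed SHM sensor features: normal data near 0,
# anomalies with a shifted mean. All names, shapes, and rates are illustrative.
d, k = 8, 3
X_unlabeled = rng.normal(0.0, 1.0, size=(2000, d))           # vast unlabeled pool
X_labeled = np.vstack([rng.normal(0.0, 1.0, size=(20, d)),   # few labeled normals
                       rng.normal(3.0, 1.0, size=(20, d))])  # few labeled anomalies
y_labeled = np.array([0] * 20 + [1] * 20)

# Stage 1: self-supervised pre-training. A linear autoencoder is trained on the
# unlabeled pool with a reconstruction pretext task (no labels involved).
A = rng.normal(0.0, 0.1, size=(d, k))   # encoder
B = rng.normal(0.0, 0.1, size=(k, d))   # decoder
lr = 5e-3
for _ in range(300):
    E = X_unlabeled - X_unlabeled @ A @ B                      # reconstruction error
    A += lr * 2 * X_unlabeled.T @ E @ B.T / len(X_unlabeled)   # gradient descent step
    B += lr * 2 * A.T @ X_unlabeled.T @ E / len(X_unlabeled)

# Stage 2: supervised fine-tuning. A logistic head is trained on the frozen
# pre-trained representation using only the small labeled set.
Z = X_labeled @ A
w, b = np.zeros(k), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))
    w -= 0.1 * Z.T @ (p - y_labeled) / len(y_labeled)
    b -= 0.1 * float(np.mean(p - y_labeled))

p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))
acc = float(np.mean((p > 0.5) == y_labeled))
print(f"fine-tuned detection accuracy on the labeled set: {acc:.2f}")
```

The key design point the sketch captures is the division of labor: the pretext task consumes only unlabeled data, so the expensive labeling effort is spent solely on the small fine-tuning set.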