Reliable operation of wind turbines requires frequent inspections, as even minor surface damage can degrade aerodynamic performance, reduce energy output, and accelerate blade wear. Central to automating these inspections is the accurate segmentation of turbine blades in visual data. This task is traditionally addressed with dense, pixel-wise deep learning models, but such methods demand extensive annotated datasets, posing scalability challenges. In this work, we introduce an annotation-efficient segmentation approach that reframes the pixel-level task as a binary region classification problem. Image regions are generated by a fully unsupervised, interpretable Modular Adaptive Region Growing technique, guided by image-specific Adaptive Thresholding and enhanced by a Region Merging step that consolidates fragmented areas into coherent segments. To improve generalization and classification robustness, we introduce RegionMix, an augmentation strategy that synthesizes new training samples by combining distinct regions. Our framework achieves state-of-the-art segmentation accuracy and strong cross-site generalization, consistently segmenting turbine blades across distinct wind farms.
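The region-combination idea behind RegionMix can be illustrated with a minimal sketch. This is an assumed, CutMix-style simplification (the function name, region selection, and label handling here are illustrative, not the authors' exact procedure): pixels from a region of one image are pasted into another, and the binary blade masks are merged so the labels follow the pasted pixels.

```python
import numpy as np

def regionmix(img_a, mask_a, img_b, mask_b):
    """Illustrative RegionMix-style augmentation (hypothetical sketch):
    paste the masked region of image B onto image A and merge the
    binary masks so labels follow the pasted pixels."""
    mixed = img_a.copy()
    mixed[mask_b] = img_b[mask_b]   # copy region pixels from B into A
    mixed_mask = mask_a | mask_b    # pasted pixels keep their blade label
    return mixed, mixed_mask

# Tiny usage example: a dark image A with no blade region, and a bright
# image B whose top-left 2x2 patch is a (hypothetical) blade region.
img_a = np.zeros((4, 4, 3), dtype=np.uint8)
img_b = np.full((4, 4, 3), 255, dtype=np.uint8)
mask_a = np.zeros((4, 4), dtype=bool)
mask_b = np.zeros((4, 4), dtype=bool)
mask_b[:2, :2] = True

mixed, mixed_mask = regionmix(img_a, mask_a, img_b, mask_b)
```

Because the synthesized sample mixes regions from different source images, a region classifier trained on such samples is less tied to any single image's background statistics, which is consistent with the cross-site generalization goal stated above.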