Extracting high-fidelity 2D contours from Scanning Electron Microscope (SEM) images is critical for calibrating Optical Proximity Correction (OPC) models. While foundation models such as Segment Anything 2 (SAM2) are promising, adapting them to specialized domains with scarce annotated data remains a major challenge. This paper presents a case study on adapting SAM2 for SEM contour extraction in a few-shot setting. We propose SegSEM, a framework built on two principles: a data-efficient fine-tuning strategy that selectively trains only the model's encoders, and a robust hybrid architecture that integrates a traditional algorithm as a confidence-aware fallback. Using a small dataset of 60 production images, our experiments demonstrate the viability of this methodology. The primary contribution is a methodology for leveraging foundation models in data-constrained industrial applications.