The detection and classification of exfoliated two-dimensional (2D) material flakes in optical microscope images can be automated with computer vision algorithms. This has the potential to increase the accuracy and objectivity of classification and the efficiency of sample fabrication, and it enables large-scale data collection. Existing algorithms often struggle to identify low-contrast materials and typically require large amounts of training data. Here, we present a deep learning model, called MaskTerial, that uses an instance segmentation network to reliably identify 2D material flakes. The model is extensively pre-trained using a synthetic data generator that produces realistic microscopy images from unlabeled data. This results in a model that can quickly adapt to new materials with as few as 5 to 10 images. Furthermore, an uncertainty estimation model then classifies the predictions based on their optical contrast. We evaluate our method on eight datasets covering five different 2D materials and demonstrate significant improvements over existing techniques in detecting low-contrast materials such as hexagonal boron nitride.
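As background for the contrast-based classification step mentioned above, the sketch below shows how per-channel optical contrast between a flake region and the bare substrate is commonly computed. This is a minimal illustration under general assumptions, not the paper's implementation; the function name and the example intensity values are hypothetical.

```python
import numpy as np

def optical_contrast(flake_rgb, substrate_rgb):
    """Per-channel optical (Weber) contrast of a flake against the
    bare substrate: C = (I_substrate - I_flake) / I_substrate.
    Inputs are mean RGB intensities of each region (floats in [0, 255]).
    Low-contrast materials such as thin hBN yield values close to zero."""
    flake = np.asarray(flake_rgb, dtype=float)
    substrate = np.asarray(substrate_rgb, dtype=float)
    return (substrate - flake) / substrate

# Illustrative values: a faint flake only slightly darker than the substrate.
contrast = optical_contrast([118.0, 121.0, 124.0], [120.0, 125.0, 130.0])
print(np.round(contrast, 3))
```

A classifier can then compare such contrast vectors against reference values for known materials and layer thicknesses; an uncertainty estimate over this comparison is what lets ambiguous, near-zero-contrast predictions be flagged rather than misclassified.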