Geometric Dimensioning and Tolerancing (GD&T) plays a critical role in manufacturing by defining acceptable variations in part features to ensure component quality and functionality. However, extracting GD&T information from 2D engineering drawings is a time-consuming and labor-intensive task, often relying on manual efforts or semi-automated tools. To address these challenges, this study proposes an automated and computationally efficient GD&T extraction method by fine-tuning Florence-2, an open-source vision-language model (VLM). The model is trained on a dataset of 400 drawings with ground truth annotations provided by domain experts. For comparison, two state-of-the-art closed-source VLMs, GPT-4o and Claude-3.5-Sonnet, are evaluated on the same dataset. All models are assessed using precision, recall, F1-score, and hallucination metrics. Due to the computational cost and impracticality of fine-tuning large closed-source VLMs for domain-specific tasks, GPT-4o and Claude-3.5-Sonnet are evaluated in a zero-shot setting. In contrast, Florence-2, a smaller model with 0.23 billion parameters, is optimized through full-parameter fine-tuning across three distinct experiments, each utilizing datasets augmented to different levels. The results show that Florence-2 achieves a 29.95% increase in precision, a 37.75% increase in recall, a 52.40% improvement in F1-score, and a 43.15% reduction in hallucination rate compared to the best-performing closed-source model. These findings highlight the effectiveness of fine-tuning smaller, open-source VLMs like Florence-2, offering a practical and efficient solution for automated GD&T extraction to support downstream manufacturing tasks.
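The four evaluation metrics named above can be sketched concretely. The snippet below is a minimal illustration, not the study's actual evaluation code: it assumes predictions and ground truth are represented as sets of normalized GD&T callout strings per drawing, and it uses one simple definition of hallucination rate (the share of predicted callouts absent from the ground truth).

```python
def evaluate_extraction(predicted: set[str], ground_truth: set[str]) -> dict[str, float]:
    """Score one drawing's extracted GD&T callouts against expert annotations.

    Sketch only: the string encoding of callouts and this hallucination
    definition are assumptions, not taken from the study.
    """
    true_positives = len(predicted & ground_truth)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    denom = precision + recall
    f1 = (2 * precision * recall / denom) if denom else 0.0
    # Hallucination rate: fraction of predicted callouts with no match
    # in the expert annotations.
    hallucination_rate = (len(predicted - ground_truth) / len(predicted)) if predicted else 0.0
    return {
        "precision": precision,
        "recall": recall,
        "f1": f1,
        "hallucination_rate": hallucination_rate,
    }


# Hypothetical example: two callouts extracted correctly,
# one hallucinated, one missed.
scores = evaluate_extraction(
    predicted={"POS|0.2|A|B", "FLAT|0.05", "CIRC|0.1|A"},
    ground_truth={"POS|0.2|A|B", "FLAT|0.05", "PROF|0.3|A"},
)
```

In a full evaluation these per-drawing scores would typically be aggregated (e.g. micro- or macro-averaged) across the test set before comparing models.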