We introduce SldprtNet, a large-scale dataset of over 242,000 industrial parts designed for semantic-driven CAD modeling, geometric deep learning, and the training and fine-tuning of multimodal models for 3D design. The dataset provides 3D models in both .step and .sldprt formats to support diverse training and testing scenarios. To enable parametric modeling and make the dataset easy to scale, we developed a pair of supporting tools, an encoder and a decoder, which cover 13 types of CAD commands and perform lossless conversion between 3D models and a structured text representation. In addition, each sample is paired with a composite image created by merging seven rendered views of the 3D model from different viewpoints, which shortens the input token length and accelerates inference. Combining this image with the parameterized text produced by the encoder, we use the lightweight multimodal language model Qwen2.5-VL-7B to generate a natural language description of each part's appearance and function. To ensure accuracy, we manually verified and aligned the generated descriptions, rendered images, and 3D models. These descriptions, together with the parameterized modeling scripts, rendered images, and 3D model files, are fully aligned to form SldprtNet. To assess its effectiveness, we fine-tuned baseline models on a subset of the dataset, comparing image-plus-text inputs against text-only inputs; the results confirm the necessity and value of multimodal data for CAD generation. With carefully selected real-world industrial parts, tools that support scalable dataset expansion, diverse modalities, and broad coverage of model complexity and geometric features, SldprtNet is a comprehensive multimodal dataset built for semantic-driven CAD modeling and cross-modal learning.