We present Cephalo, a series of multimodal vision large language models (V-LLMs) designed for materials science applications, integrating visual and linguistic data for enhanced understanding. A key innovation of Cephalo is its advanced dataset generation method. Trained on integrated image and text data from thousands of scientific papers and science-focused Wikipedia articles, Cephalo can interpret complex visual scenes, generate precise language descriptions, and answer queries about images effectively. The combination of a vision encoder with an autoregressive transformer supports multimodal natural language understanding, which can be coupled with other generative methods to create an image-to-text-to-3D pipeline. To develop more capable models from smaller ones, we report both mixture-of-experts methods and model merging. We examine the models in diverse use cases spanning biological materials, fracture and engineering analysis, protein biophysics, and bio-inspired design based on insect behavior. Generative applications include bio-inspired designs, such as pollen-inspired architected materials, as well as the synthesis of bio-inspired material microstructures from a photograph of a solar eclipse. Additional fine-tuning on a series of molecular dynamics results demonstrates Cephalo's enhanced ability to accurately predict statistical features of stress and atomic energy distributions, as well as crack dynamics and damage in materials.
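The model merging mentioned above can, in its simplest linear form, be sketched as parameter-wise interpolation between checkpoints. This is a generic illustration only, not Cephalo's specific recipe; the `merge_models` helper and the toy checkpoints are hypothetical, and parameters are represented as plain lists of floats for clarity.

```python
def merge_models(state_dicts, weights=None):
    """Linearly interpolate matching parameters from several checkpoints.

    state_dicts: list of dicts mapping parameter name -> list of floats
                 (a stand-in for real tensor state dicts).
    weights: optional mixing coefficients; defaults to a uniform average.
    """
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = [
            sum(w * sd[name][i] for w, sd in zip(weights, state_dicts))
            for i in range(len(state_dicts[0][name]))
        ]
    return merged


# Two toy "checkpoints" sharing one parameter vector (hypothetical values).
ckpt_a = {"layer.weight": [0.0, 2.0]}
ckpt_b = {"layer.weight": [4.0, 6.0]}
print(merge_models([ckpt_a, ckpt_b]))  # uniform average: {'layer.weight': [2.0, 4.0]}
```

In practice such merging operates on full model state dicts, and non-uniform `weights` allow biasing the merged model toward one parent; mixture-of-experts methods instead route inputs among the parent models at inference time rather than averaging their weights.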