The scope of neural code intelligence is rapidly expanding beyond text-based source code to encompass the rich visual outputs that programs generate. This visual dimension is critical for advanced applications like flexible content generation and precise, program-driven editing of visualizations. However, progress has been impeded by the scarcity of high-quality multimodal code data, a bottleneck stemming from challenges in synthesis and quality assessment. To address these challenges, we make contributions from both a data and a modeling perspective. We first introduce a complete synthesis toolkit that leverages reciprocal synergies between data modalities to efficiently produce a large-scale, high-quality corpus, ranging from standard charts to complex interactive web UIs and code-driven animations. Leveraging this toolkit, we construct JanusCode-800K, the largest multimodal code corpus to date. This corpus powers the training of our models, JanusCoder and JanusCoderV, which establish a visual-programmatic interface for generating code from textual instructions, visual inputs, or a combination of both. Our unified model is a departure from existing approaches that build specialized models for isolated tasks. Extensive experiments on both text-centric and vision-centric coding tasks demonstrate the superior performance of the JanusCoder series, with our 7B- to 14B-scale models approaching, and in some cases exceeding, the performance of commercial models. Furthermore, extensive analysis provides key insights into harmonizing programmatic logic with its visual expression. Our code and checkpoints are available at https://github.com/InternLM/JanusCoder.