We introduce open-sci-ref, a family of dense transformer models trained as research baselines across multiple model scales (0.13B to 1.7B parameters) and token scales (up to 1T tokens) on 8 recent open reference datasets. Evaluating the models on various standardized benchmarks, our set of training runs establishes reference points that let researchers assess the sanity and quality of alternative training approaches across scales and datasets. Intermediate checkpoints allow comparison and study of training dynamics. The established reference baselines also allow training procedures to be compared through their scaling trends, aligning them on a common compute axis. Comparing the open reference datasets reveals that training on NemoTron-CC HQ consistently outperforms the other reference datasets, followed by DCLM-baseline and FineWeb-Edu. In addition to intermediate training checkpoints, the release includes logs, code, and downstream evaluations to simplify reproduction, standardize comparison, and facilitate future research.
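To make the notion of a common compute axis concrete, the following minimal Python sketch (not part of the release) estimates total training compute with the widely used C ≈ 6·N·D approximation; the specific model/token pairings are illustrative assumptions, not the paper's actual configurations.

```python
# Minimal sketch: place training runs on a common compute axis using the
# standard C ≈ 6 * N * D approximation (N = parameters, D = training tokens).
# The model/token pairings below are illustrative assumptions only; they
# mirror the scales named in the abstract but are not the paper's configs.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs via C ≈ 6 * N * D."""
    return 6.0 * params * tokens

runs = {
    "0.13B params @ 300B tokens (assumed)": (0.13e9, 300e9),
    "1.7B params  @ 1T tokens":             (1.7e9, 1e12),
}

for name, (n_params, n_tokens) in runs.items():
    print(f"{name}: {training_flops(n_params, n_tokens):.2e} FLOPs")
```

Aligning runs by estimated compute in this way lets training procedures on different datasets and model sizes be compared through their scaling trends rather than at a single arbitrary checkpoint.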