We introduce SmallTalk LM, an innovative method for training a mixture of language models in an almost asynchronous manner. Each model in the mixture specializes in distinct parts of the data distribution, without the need for high-bandwidth communication between the nodes training each model. At inference, a lightweight router directs a given sequence to a single expert according to a short prefix. This inference scheme naturally uses only a fraction of the parameters of the overall mixture model. Our experiments on language modeling demonstrate that SmallTalk LM achieves significantly lower perplexity than dense model baselines for the same total training FLOPs and an almost identical inference cost. Finally, in our downstream evaluations we outperform the dense baseline on $75\%$ of the tasks.
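To make the routing scheme concrete, below is a minimal sketch (not the authors' implementation) of prefix-based top-1 routing: a small router scores a short prefix of the sequence, and only the single highest-scoring expert is then run, so inference touches one expert's parameters per sequence. The names `PrefixRouter`, `route_and_generate`, `experts`, and `prefix_ids` are illustrative assumptions, not part of the paper.

```python
# Hypothetical sketch of hard, prefix-based routing to a single expert LM.
import torch
import torch.nn as nn


class PrefixRouter(nn.Module):
    """Lightweight router: embed a short prefix, pool it, and score each expert."""

    def __init__(self, vocab_size: int, hidden: int, num_experts: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.score = nn.Linear(hidden, num_experts)

    def forward(self, prefix_ids: torch.Tensor) -> torch.Tensor:
        # prefix_ids: (batch, prefix_len) token ids from the start of each sequence
        h = self.embed(prefix_ids).mean(dim=1)   # mean-pool the prefix embeddings
        return self.score(h)                     # (batch, num_experts) routing logits


def route_and_generate(prefix_ids: torch.Tensor, router: PrefixRouter, experts: list):
    """Pick one expert per sequence from its prefix, then run only that expert."""
    expert_idx = router(prefix_ids).argmax(dim=-1)   # hard, top-1 routing
    outputs = []
    for b in range(prefix_ids.size(0)):
        expert = experts[expert_idx[b].item()]       # a single expert LM per sequence
        outputs.append(expert(prefix_ids[b : b + 1]))  # only its parameters are used
    return outputs
```

Because the routing decision is made once per sequence from the prefix, the per-token inference cost matches that of a single expert rather than the full mixture.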