In this work, we demonstrate that small language models (SLMs), specifically a 100M-parameter GPT-2 model, can achieve competitive performance on multitask prompt-generation tasks while requiring only a fraction of the computational resources needed by large language models (LLMs). Through a novel combination of upside-down reinforcement learning and synthetic data distillation from a powerful LLM, Llama-3, we train an SLM whose relevance scores fall within 5% of those of state-of-the-art models, including Llama-3, Qwen2, and Mistral, despite the SLM being up to 80 times smaller; this makes it well suited to resource-constrained and real-time applications. This study highlights the potential of SLMs as efficient multitask learners in multimodal settings, offering a promising alternative to LLMs for scalable, low-latency deployments.
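To make the named technique concrete, the sketch below illustrates the general upside-down reinforcement learning idea as reward-conditioned supervised fine-tuning: distilled teacher outputs are scored, the score is prepended to each example as a command token, and the SLM is then trained with ordinary next-token prediction. This is a minimal illustration under stated assumptions, not the paper's actual implementation; the `make_training_example` helper, the token format, and the sample data are all hypothetical.

```python
# Minimal sketch of upside-down RL as reward-conditioned supervised fine-tuning.
# Assumption: training data consists of (task, context, output, reward) tuples
# distilled from a teacher LLM and scored by a relevance metric.

def make_training_example(task: str, context: str, target_output: str,
                          reward: float) -> str:
    """Prepend the desired reward as a command token, so the model learns to
    map (desired reward, input) -> output. At inference time, conditioning on
    a high reward elicits high-quality generations."""
    return f"<reward={reward:.1f}> <task={task}> {context} => {target_output}"

# Hypothetical distillation data: teacher outputs with relevance scores.
distilled = [
    ("caption", "photo of a dog on a beach",
     "A happy dog runs along the sunlit shore.", 0.9),
    ("caption", "photo of a dog on a beach",
     "dog beach", 0.3),
]

corpus = [make_training_example(*row) for row in distilled]
for line in corpus:
    print(line)
# The resulting corpus is used for standard language-model fine-tuning of the SLM.
```

Because both high- and low-reward examples are kept and labeled rather than filtered, the model learns the full mapping from desired reward to output quality, which is what distinguishes this formulation from plain behavior cloning on the best samples.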