In industry, numerous tasks are deployed online. Traditional approaches often tackle each task with a separate network, which leads to excessive costs for developing and scaling models, especially in the context of large language models. Although multi-task methods can save costs through parameter sharing, they often struggle to outperform single-task methods in real-world applications. To tackle these challenges, we present a three-stage multi-task learning framework for large language models: task filtering, followed by fine-tuning on high-resource tasks, and finally fine-tuning on all tasks. We conducted comprehensive experiments in single-task and multi-task settings. Evaluated on different benchmarks, our approach achieves performance comparable to the single-task method while reducing up to 90.9\% of its overhead.
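To make the three stages concrete, here is a minimal sketch of the pipeline's control flow. The abstract does not specify the filtering criterion, the high-resource threshold, or the training stack, so the `keep` predicate, `high_resource_cutoff`, and `fine_tune` below are hypothetical stand-ins rather than the paper's actual method.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    name: str
    examples: List[dict]  # supervised examples for this task

# Hypothetical placeholder: a real system would call an LLM fine-tuning
# stack here; this stub just records the checkpoint lineage.
def fine_tune(checkpoint: str, data: List[dict]) -> str:
    return f"{checkpoint}->ft[{len(data)} examples]"

def three_stage_mtl(base_model: str,
                    tasks: List[Task],
                    keep: Callable[[Task], bool],
                    high_resource_cutoff: int) -> str:
    # Stage 1: task filtering. The abstract does not spell out the
    # criterion, so a caller-supplied predicate stands in for it.
    kept = [t for t in tasks if keep(t)]

    # Stage 2: fine-tune on high-resource tasks only (assumed here to
    # mean tasks with at least `high_resource_cutoff` examples).
    high = [ex for t in kept if len(t.examples) >= high_resource_cutoff
            for ex in t.examples]
    ckpt = fine_tune(base_model, high)

    # Stage 3: fine-tune the resulting checkpoint on all kept tasks.
    all_data = [ex for t in kept for ex in t.examples]
    return fine_tune(ckpt, all_data)

if __name__ == "__main__":
    tasks = [Task("intent", [{"x": i} for i in range(1000)]),
             Task("ner", [{"x": i} for i in range(50)])]
    print(three_stage_mtl("base-llm", tasks, keep=lambda t: True,
                          high_resource_cutoff=500))
```

One design point this sketch makes explicit: stage 2 warm-starts the shared model on high-resource tasks before stage 3 mixes in everything, so low-resource tasks fine-tune from a stronger initialization instead of competing for capacity from scratch.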