In industry, numerous tasks are deployed online. Traditional approaches often tackle each task with a dedicated network, which leads to excessive costs for developing and scaling models, especially in the context of large language models. Although multi-task methods can save costs through parameter sharing, they often struggle to outperform single-task methods in real-world applications. To tackle these challenges, we present a three-stage multi-task learning framework for large language models: task filtering, followed by fine-tuning on high-resource tasks, and finally fine-tuning on all tasks. We conducted comprehensive experiments in both single-task and multi-task settings. Experiments on different benchmarks demonstrate that our approach achieves performance comparable to the single-task method while reducing its overhead by up to 90.9\%.
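For concreteness, the following is a minimal sketch of the three-stage pipeline described above. The function names (\texttt{filter\_tasks}, \texttt{finetune}), the filtering rule, and the high-resource threshold are illustrative assumptions, not the paper's actual implementation.

\begin{verbatim}
from typing import Dict, List

def filter_tasks(tasks: Dict[str, List[dict]]) -> Dict[str, List[dict]]:
    # Stage 1: task filtering. Placeholder rule: keep non-empty tasks;
    # the paper's actual filtering criterion is not specified here.
    return {name: data for name, data in tasks.items() if data}

def finetune(model: str, data: List[dict]) -> str:
    # Placeholder for a supervised fine-tuning run; returns a checkpoint tag.
    return f"{model}+sft[{len(data)} examples]"

def three_stage_mtl(base_model: str,
                    tasks: Dict[str, List[dict]],
                    high_resource_threshold: int = 10_000) -> str:
    kept = filter_tasks(tasks)
    # Stage 2: fine-tune only on tasks with abundant data
    # (threshold value is an assumption for illustration).
    high = [ex for d in kept.values()
            if len(d) >= high_resource_threshold for ex in d]
    model = finetune(base_model, high)
    # Stage 3: fine-tune the resulting checkpoint on all filtered tasks.
    all_data = [ex for d in kept.values() for ex in d]
    return finetune(model, all_data)
\end{verbatim}

In this staged design, the high-resource stage provides a strong shared initialization before low-resource tasks are mixed in, which is one plausible reading of why the final multi-task model can match single-task performance.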