This study explores methods for improving the performance of GPT-4-based large language models within a multi-task learning framework, conducting experiments on two tasks: text classification and automatic summarization. By combining a shared feature extractor with task-specific modules, the model achieves knowledge sharing and joint optimization across multiple tasks within a single architecture. The experiments use multiple subtasks of the GLUE benchmark to compare the multi-task model against single-task GPT-4, a multi-task version of GPT-3, the BERT base model, and a classic Bi-LSTM with Attention model. The results show that the proposed multi-task learning model outperforms all comparison models in both text classification accuracy and ROUGE scores for summary generation, demonstrating the advantages of multi-task learning for improving generalization and enabling collaborative learning across tasks. The model also maintains stable loss convergence during training, indicating good learning efficiency and adaptability to the test set. This study verifies the applicability of the multi-task learning framework to large language models, particularly its ability to balance performance across different tasks. In the future, as large language models are combined with multimodal data and dynamic task-adjustment techniques, frameworks based on multi-task learning are expected to play a greater role in cross-domain applications and to offer new directions for the development of general artificial intelligence.
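The shared-extractor-plus-task-heads design named above is the core architectural idea. As a rough illustration of that pattern only (the authors' GPT-4-based implementation is not reproduced here, and all module names, layer sizes, and the toy training step below are illustrative assumptions), a minimal PyTorch sketch might pair one shared transformer encoder with a classification head and a token-level summarization head:

```python
# Minimal sketch of a shared encoder with task-specific heads.
# All names and dimensions are hypothetical, not the paper's released code.
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, vocab_size=30522, d_model=512, n_heads=8,
                 n_layers=4, n_classes=2):
        super().__init__()
        # Shared feature extractor: one transformer encoder reused by both tasks.
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Task-specific modules.
        self.cls_head = nn.Linear(d_model, n_classes)   # text classification
        self.sum_head = nn.Linear(d_model, vocab_size)  # per-token summary logits

    def forward(self, input_ids, task):
        h = self.encoder(self.embed(input_ids))         # shared representation
        if task == "classify":
            return self.cls_head(h.mean(dim=1))         # pooled sentence logits
        return self.sum_head(h)                         # position-wise vocabulary logits

# Joint training step: losses from both tasks update the shared encoder,
# which is where the cross-task knowledge sharing happens.
model = MultiTaskModel()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randint(0, 30522, (2, 16))                    # toy token ids
cls_loss = nn.functional.cross_entropy(model(x, "classify"),
                                       torch.tensor([0, 1]))
sum_loss = nn.functional.cross_entropy(
    model(x, "summarize").flatten(0, 1), x.flatten())   # toy target for illustration
(cls_loss + sum_loss).backward()
opt.step()
```

Summing the per-task losses before the backward pass is the simplest joint-optimization choice; weighted or dynamically adjusted task losses, as hinted at in the outlook above, would replace the plain sum.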