As large language models (LLMs) continue to scale, their improved general performance often remains insufficient for domain-specific tasks, and systematically analyzing their failures and effectively enhancing their performance remain significant challenges. This paper introduces the Re-TASK framework, a novel theoretical model that Revisits LLM Tasks from the cApability, Skill, and Knowledge perspectives, guided by the principles of Bloom's Taxonomy and Knowledge Space Theory. Re-TASK provides a systematic methodology for understanding, evaluating, and enhancing LLMs on domain-specific tasks. It examines the interplay among an LLM's capabilities, the knowledge it processes, and the skills it applies, elucidating how these elements are interconnected and how they affect task performance. Applying the Re-TASK framework reveals that many failures on domain-specific tasks can be attributed to insufficient knowledge or inadequate skill adaptation. Building on this insight, we propose structured strategies for enhancing LLMs through targeted knowledge injection and skill adaptation. Specifically, we identify the key capability items associated with a task and employ a deliberately designed prompting strategy to improve task performance, thereby reducing the need for extensive fine-tuning. Alternatively, we fine-tune the LLM with capability-specific instructions, further validating the efficacy of our framework. Experimental results confirm the framework's effectiveness, demonstrating substantial improvements in both the performance and applicability of LLMs.