Parameter-Efficient Fine-Tuning (PEFT) methods address the increasing size of Large Language Models (LLMs). Currently, many newly introduced PEFT methods are challenging to replicate, deploy, or compare with one another. To address this, we introduce PEFT-Factory, a unified framework for efficient fine-tuning of LLMs using both off-the-shelf and custom PEFT methods. While its modular design supports extensibility, it natively provides a representative set of 19 PEFT methods, 27 classification and text generation datasets covering 12 tasks, and both standard and PEFT-specific evaluation metrics. As a result, PEFT-Factory provides a ready-to-use, controlled, and stable environment, improving the replicability and benchmarking of PEFT methods. PEFT-Factory is a downstream framework that originates from the popular LLaMA-Factory, and it is publicly available at https://github.com/kinit-sk/PEFT-Factory.