Adaptability is a central feature of foundation models, enabling them to handle unseen downstream tasks effectively. Parameter-efficient fine-tuning methods such as LoRA facilitate efficient adaptation of large foundation models using labeled, high-quality, and generally scarce task data. To mitigate this data scarcity when fine-tuning foundation models, we propose leveraging task similarity across multiple downstream users. Intuitively, users with similar tasks should be able to assist each other by boosting the effective fine-tuning data size. We propose Collaborative Low-Rank Adaptation, or CoLoRA, which exploits task similarity to collaboratively and efficiently fine-tune personalized foundation models. The main idea of CoLoRA is to train one shared adapter that captures the underlying similarities across all tasks, together with personalized adapters tailored to each user's specific task. We theoretically analyze CoLoRA on heterogeneous linear regression and provide provable guarantees for ground-truth recovery. We also conduct several natural language experiments with varying task similarity, which further demonstrate that when trained together with similar tasks, individual task performance improves significantly.
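The shared-plus-personalized adapter decomposition described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: all names (`W0`, `B_shared`, `A_shared`, `B_user`, `A_user`) and dimensions are hypothetical, and we assume each user's effective weight is the frozen base weight plus one shared low-rank update plus one user-specific low-rank update.

```python
# Hypothetical sketch: user i's adapted weight is
#   W_i = W0 + B_shared @ A_shared + B_i @ A_i,
# combining a shared low-rank adapter (capturing cross-task similarity)
# with a personalized low-rank adapter for user i's own task.

def matmul(X, Y):
    # Plain-Python matrix product for small illustrative matrices.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def add(X, Y):
    # Elementwise matrix addition.
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

W0 = [[1.0, 0.0], [0.0, 1.0]]                     # frozen base weight (2x2)
B_shared, A_shared = [[1.0], [0.0]], [[0.5, 0.0]]  # rank-1 shared adapter
B_user, A_user = [[0.0], [1.0]], [[0.0, 0.25]]     # rank-1 personalized adapter

# Effective weight for this user: base + shared update + personalized update.
W_user = add(W0, add(matmul(B_shared, A_shared), matmul(B_user, A_user)))
```

During collaborative training, the shared factors would be updated on all users' data while each personalized pair is updated only on its own user's data.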