The lottery ticket hypothesis posits the existence of ``winning tickets'' within a randomly initialized neural network. Do winning tickets exist for LLMs in fine-tuning scenarios? How can we find such winning tickets? In this paper, we propose KS-Lottery, a method to identify a small subset of LLM parameters that is highly effective in multilingual fine-tuning. Our key idea is to use the Kolmogorov-Smirnov test to analyze the distribution shift of parameters before and after fine-tuning. We further prove theoretically that KS-Lottery finds certified winning tickets in the embedding layer: fine-tuning only the found parameters is guaranteed to perform as well as full fine-tuning. Compared with other parameter-efficient tuning algorithms on translation tasks, KS-Lottery finds a much smaller set of parameters to fine-tune while achieving performance comparable to full fine-tuning of the LLM. Surprisingly, we find that fine-tuning the embeddings of just 18 tokens in LLaMA suffices to match full fine-tuning translation performance~\footnote{https://github.com/CONE-MT/KS-Lottery.}.
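To make the key idea concrete, the following is a minimal sketch of KS-based parameter selection on an embedding matrix, assuming SciPy's two-sample Kolmogorov-Smirnov test; the significance threshold, function names, and toy data are illustrative assumptions, not the paper's exact procedure.

\begin{verbatim}
# Sketch of the KS-based selection idea: compare each token's embedding
# distribution before vs. after fine-tuning with a two-sample
# Kolmogorov-Smirnov test and keep tokens whose distribution shifted
# significantly. The threshold and toy data below are illustrative.
import numpy as np
from scipy.stats import ks_2samp

def ks_winning_tickets(emb_before, emb_after, alpha=0.01):
    """Return token ids whose embedding row shifted significantly.

    emb_before, emb_after: (vocab_size, hidden_dim) embedding matrices
    from the same model before and after fine-tuning.
    """
    selected = []
    for token_id, (row_b, row_a) in enumerate(zip(emb_before, emb_after)):
        result = ks_2samp(row_b, row_a)  # KS test over row coordinates
        if result.pvalue < alpha:        # significant distribution shift
            selected.append(token_id)
    return selected

# Toy usage: perturb 18 rows of a random embedding matrix and recover them.
rng = np.random.default_rng(0)
before = rng.normal(size=(100, 256))
after = before.copy()
after[:18] += 1.0  # shift the first 18 token embeddings
print(ks_winning_tickets(before, after))  # roughly ids 0..17
\end{verbatim}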