Fine-tuning LLMs with first-order methods like back-propagation is computationally intensive. Zeroth-Order (ZO) optimisation, which uses function evaluations instead of gradients, reduces memory usage but suffers from slow convergence in high-dimensional models. As a result, ZO research on LLMs has mostly focused on classification, overlooking more complex generative tasks. In this paper, we introduce ZOPrO, a novel ZO algorithm designed for \textit{Preference Optimisation} in LLMs. We begin by analysing the interplay between policy and reward models during traditional (first-order) Preference Optimisation, uncovering patterns in their relative updates. Guided by these insights, we adapt Simultaneous Perturbation Stochastic Approximation (SPSA) with a targeted sampling strategy to accelerate convergence. Through experiments on summarisation, machine translation, and conversational assistants, we demonstrate that our method consistently enhances reward signals while achieving convergence times comparable to first-order methods. While it falls short of some state-of-the-art methods, our work is the first to apply Zeroth-Order methods to Preference Optimisation in LLMs, going beyond classification tasks and paving the way for a largely unexplored research direction. Code and visualisations are available at https://github.com/alessioGalatolo/VisZOPrO.
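To illustrate the kind of update ZOPrO builds on, the following is a minimal sketch of a plain SPSA step: the gradient is estimated from two loss evaluations under a random Rademacher perturbation, with no back-propagation. This is standard SPSA only; the paper's targeted sampling strategy and preference-optimisation setup are not reproduced here, and all names (`spsa_step`, `loss_fn`, `eps`, `lr`) are illustrative assumptions.

```python
import numpy as np

def spsa_step(params, loss_fn, eps=1e-3, lr=1e-4, rng=None):
    """One plain SPSA update (sketch, not the paper's full ZOPrO method).

    Estimates the gradient from two function evaluations along a random
    +/-1 (Rademacher) direction, then takes a gradient-descent-style step.
    """
    rng = rng or np.random.default_rng()
    # Random perturbation direction with entries in {-1, +1}.
    delta = rng.choice([-1.0, 1.0], size=params.shape)
    # Two-point finite-difference estimate of the directional derivative.
    scale = (loss_fn(params + eps * delta) - loss_fn(params - eps * delta)) / (2 * eps)
    g_hat = scale * delta  # stochastic gradient estimate
    return params - lr * g_hat
```

On a simple quadratic loss, repeated calls drive the parameters toward the minimum using only loss evaluations, which is the memory advantage ZO methods trade against slower convergence in high dimensions.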