Search and recommendation (S&R) are core to online platforms: search addresses explicit intent expressed through queries, while recommendation models implicit intent inferred from behaviors. Their complementary roles motivate a unified modeling paradigm. Early studies unified S&R with shared encoders and task-specific heads, while recent efforts reframe item ranking in both tasks as conditional generation. The latter is particularly promising, as it enables end-to-end optimization and leverages the semantic understanding of LLMs. However, existing methods rely on full fine-tuning, which is computationally expensive and limits scalability. Parameter-efficient fine-tuning (PEFT) offers a more practical alternative but faces two critical challenges when unifying S&R: (1) gradient conflicts across tasks caused by divergent optimization objectives, and (2) shifts in user intent understanding caused by overfitting to the fine-tuning data, which distort general-domain knowledge and weaken LLM reasoning. To address these issues, we propose Gradient Multi-Subspace Tuning (GEMS), a novel framework that unifies S&R with LLMs while alleviating gradient conflicts and preserving general-domain knowledge. GEMS introduces (1) \textbf{Multi-Subspace Decomposition}, which disentangles shared and task-specific optimization signals into complementary low-rank subspaces, thereby reducing destructive gradient interference, and (2) \textbf{Null-Space Projection}, which constrains parameter updates to a subspace orthogonal to the general-domain knowledge space, mitigating shifts in user intent understanding. Extensive experiments on benchmark datasets show that GEMS consistently outperforms state-of-the-art baselines on both search and recommendation tasks.
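To make the null-space projection idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: given a hypothetical matrix \texttt{K} whose rows span directions associated with general-domain knowledge, a gradient update is projected onto the orthogonal complement of \texttt{K}'s row space, so the update cannot move parameters along those knowledge directions.

```python
import numpy as np

def null_space_project(grad, K, tol=1e-10):
    """Project gradient rows onto the null space of K's row space.

    Illustrative only: K and the gradient layout are assumptions,
    not details taken from the GEMS paper.
    """
    # Orthonormal basis of K's row space via SVD.
    _, S, Vt = np.linalg.svd(K, full_matrices=False)
    V = Vt[: np.sum(S > tol)].T  # columns span the row space of K
    # Remove the component of grad lying in that row space.
    return grad - grad @ V @ V.T

rng = np.random.default_rng(0)
K = rng.standard_normal((4, 16))  # 4 "knowledge" directions in R^16
g = rng.standard_normal((8, 16))  # a batch of gradient rows
g_proj = null_space_project(g, K)
# The projected gradient is orthogonal to every knowledge direction,
# so a step along g_proj leaves those directions unchanged.
print(np.abs(g_proj @ K.T).max() < 1e-8)
```

Under this sketch, any parameter step taken along \texttt{g\_proj} has zero component in the span of \texttt{K}, which is the sense in which such a projection preserves a designated knowledge subspace.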