Large language models show promise in task-oriented dialogue systems, yet existing training methods often rely on token-level likelihood or preference optimization, which aligns poorly with long-horizon task success. To address this, we propose Goal-Oriented Preference Optimization (GOPO), a hierarchical reinforcement learning framework that decouples strategy planning from response generation via an Expert Agent and a Customer Service Agent. The Expert Agent optimizes multi-turn goal preferences at the dialogue-trajectory level, while the Customer Service Agent generates responses strictly aligned with the selected strategy. We evaluate GOPO on public benchmarks and e-commerce customer service datasets, and introduce Task-focused Sequential Engagement (TSE), a sequence-level metric derived from real e-commerce interaction data. On the Mgshop dataset, GOPO improves TSE by 7.7% and 10.3% over PPO and Memento, respectively, with consistent gains in sequence-level reward and generation quality. Furthermore, a 14B model trained with GOPO achieves 2.7% and 1.5% higher TSE than Qwen-235B and GPT-5.2, respectively. Ablation studies confirm the Expert Agent's critical role in long-horizon optimization, and GOPO's improvements hold consistently across additional datasets. This work establishes a new paradigm for task-oriented dialogue systems in commercial scenarios; code and datasets will be made public.