Numerous recent approaches to text style transfer characterize their methods as variants of reinforcement learning and preference optimization. In this work, we examine the relationship between these methods and a class of optimization techniques developed primarily for (non-neural) statistical machine translation, formerly known as 'tuning'. Inspired by these earlier techniques, we improve upon established preference optimization approaches by incorporating multiple iterations of exploration and optimization and by selecting contrastive examples with a 'hope' vs. 'fear' sampling strategy. Cognizant of the differences between machine translation and style transfer, however, we further tailor our framework with a new pseudo-parallel generation method and a dynamic weighted reward aggregation method, addressing the lack of parallel data and the need for a multi-objective reward. We evaluate our model on two commonly used text style transfer datasets, and through automatic and human evaluation we demonstrate its effectiveness and superiority over state-of-the-art baselines.
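To make the sampling strategy concrete, below is a minimal sketch of 'hope' vs. 'fear' contrastive pair selection in the style of SMT tuning (Chiang, 2012), paired with a simple weighted multi-objective reward. All function and variable names (`select_hope_fear`, `aggregate_reward`, the three reward components, and the weighting scheme) are illustrative assumptions, not the authors' implementation; in particular, the paper's dynamic weighting may differ from the fixed weights shown here.

```python
# Illustrative sketch only; not the authors' code.

def select_hope_fear(candidates, model_scores, rewards):
    """Pick a contrastive (chosen, rejected) pair from sampled candidates.

    hope: high model score AND high reward -> preferred example
    fear: high model score BUT low reward  -> dispreferred example
    """
    hope = max(range(len(candidates)),
               key=lambda i: model_scores[i] + rewards[i])
    fear = max(range(len(candidates)),
               key=lambda i: model_scores[i] - rewards[i])
    return candidates[hope], candidates[fear]


def aggregate_reward(style_score, content_score, fluency_score, weights):
    """Aggregate multiple style-transfer objectives into a scalar reward.

    A fixed weighted sum is assumed here for illustration; the paper
    describes a dynamic weighting method.
    """
    w_style, w_content, w_fluency = weights
    return (w_style * style_score
            + w_content * content_score
            + w_fluency * fluency_score)


# Example usage with toy scores for three sampled rewrites:
candidates = ["rewrite A", "rewrite B", "rewrite C"]
model_scores = [0.9, 0.7, 0.4]
rewards = [aggregate_reward(s, c, f, (0.5, 0.3, 0.2))
           for s, c, f in [(0.2, 0.9, 0.8), (0.9, 0.8, 0.9), (0.5, 0.5, 0.5)]]
chosen, rejected = select_hope_fear(candidates, model_scores, rewards)
```

The resulting (chosen, rejected) pair would then feed a standard contrastive preference-optimization objective, with exploration and optimization repeated over multiple iterations.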