Large Language Models (LLMs) have demonstrated powerful text generation capabilities. However, achieving optimal results with a given prompt or instruction can be challenging, especially for billion-parameter models. Moreover, undesired behaviors such as toxicity or hallucination can arise. While much larger models (e.g., ChatGPT) may be better at mitigating these issues, there is still no guarantee that they are prevented entirely. In this work, we propose formalizing text generation as a future-constrained generation problem to minimize undesirable behaviors and enforce faithfulness to instructions. The estimated satisfaction of future constraints, computed with LLMs, guides the text generation process. Our extensive experiments demonstrate the effectiveness of the proposed approach across three distinct text generation tasks: keyword-constrained generation (Lin et al., 2020), toxicity reduction (Gehman et al., 2020), and factual correctness in question answering (Gao et al., 2023).
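The guidance idea described above can be illustrated with a minimal sketch: candidate continuations are ranked by a language-model score plus an estimate of how likely the future constraint (here, keyword coverage) can still be satisfied. This is an assumption-laden toy, not the paper's implementation; `lm_score` and `future_constraint_score` are hypothetical stand-ins (in the actual approach, the constraint estimate would come from an LLM rather than a heuristic).

```python
# Toy sketch of future-constrained generation (NOT the paper's implementation):
# rank candidate continuations by LM score + estimated future constraint satisfaction.

def lm_score(text: str) -> float:
    # Stand-in for a real LLM log-probability; here it simply penalizes length.
    return -0.1 * len(text.split())

def future_constraint_score(text: str, keywords: list[str]) -> float:
    # Toy estimator: fraction of required keywords already covered.
    # In the proposed approach, an LLM would estimate this instead.
    covered = sum(1 for k in keywords if k in text)
    return covered / len(keywords)

def select_best(candidates: list[str], keywords: list[str], alpha: float = 1.0) -> str:
    # Combine fluency and constraint-satisfaction estimates, pick the best candidate.
    return max(
        candidates,
        key=lambda c: lm_score(c) + alpha * future_constraint_score(c, keywords),
    )

candidates = [
    "The cat sat quietly.",
    "The dog chased the cat around the garden.",
]
print(select_best(candidates, keywords=["dog", "cat"]))
# The second candidate wins: its keyword coverage outweighs the length penalty.
```

With `alpha` controlling the strength of constraint guidance, the same scheme could in principle weight toxicity avoidance or factuality estimates instead of keyword coverage.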