Today's large language models (LLMs) can solve challenging question-answering tasks, and prompt engineering techniques such as chain-of-thought (CoT) have gained attention for improving the explainability and correctness of their outputs. However, many models and techniques tend to produce excessively verbose answers, raising concerns about both conciseness and generation time. To address this issue, this paper analyzes the impact of output length on LLM inference pipelines, proposing novel metrics to evaluate the \textit{correct conciseness} of a model and related prompting techniques. We then examine the effect of controlling output length through a refined prompt engineering strategy, Constrained-CoT (CCoT), which encourages the model to produce more concise outputs. To better understand the effects of such a prompt, we also introduce two additional scores that analyze conciseness in terms of redundancy and information flow in the generated answers. Experiments on pretrained LLMs and multiple datasets demonstrate the benefits of the proposed metrics and the effectiveness of CCoT across different models.