Large Language Models (LLMs) have demonstrated remarkable capabilities in handling long texts and achieve near-perfect performance on traditional retrieval tasks. However, their performance degrades significantly when numerical calculations are required over long contexts. Numeric-involved long-context tasks typically cannot be addressed by current LLMs in standard settings because of their inherent limitation in simultaneously handling complex and massive information. Some CoT-like prompting methods can improve accuracy but demand massive output tokens, which is costly and slow. To address this issue, we propose a workflow that decomposes a numeric-involved long-context task into four low-level subtasks: judging, extracting, processing with code, and conclusion. The first two subtasks are relatively simple, which allows us to use smaller models to process the long context efficiently. When numerical calculations are required, we use code generated by LLMs to avoid LLMs' weakness at arithmetic. Results on two numeric-involved long-context benchmarks demonstrate that our workflow not only improves accuracy but also significantly reduces the cost of API calls.
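The four-subtask decomposition can be sketched as a simple pipeline. This is a minimal illustration under stated assumptions, not the paper's actual implementation: the `judge` and `extract` functions below stand in for calls to a smaller LLM, and `process_with_code` stands in for executing LLM-generated code.

```python
def judge(chunk: str, question: str) -> bool:
    """Subtask 1 (small model): decide whether a context chunk is relevant.
    Placeholder keyword heuristic standing in for a small-LLM call."""
    return any(word in chunk for word in question.split())

def extract(chunk: str) -> list[str]:
    """Subtask 2 (small model): pull the numeric facts out of a relevant chunk.
    Placeholder token scan standing in for a small-LLM call."""
    return [tok for tok in chunk.split() if tok.replace(".", "", 1).isdigit()]

def process_with_code(numbers: list[float]) -> float:
    """Subtask 3: run code instead of asking an LLM to do arithmetic.
    Here the 'generated code' is a fixed summation for illustration."""
    return sum(numbers)

def conclude(result: float) -> str:
    """Subtask 4: phrase the final answer from the computed result."""
    return f"The computed result is {result}."

def workflow(context_chunks: list[str], question: str) -> str:
    """Chain the four subtasks: judge -> extract -> compute -> conclude."""
    relevant = [c for c in context_chunks if judge(c, question)]
    numbers = [float(n) for c in relevant for n in extract(c)]
    return conclude(process_with_code(numbers))
```

Because judging and extracting each see only one chunk at a time, they never require a model that handles the full long context at once, which is what lets cheaper models carry most of the token load.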