The complexity of table structures and question logic makes table-based question answering (TQA) challenging for Large Language Models (LLMs), often requiring task simplification before solving. This paper reveals that the reasoning process involved in task simplification may be more valuable than the simplified tasks themselves, and aims to improve TQA performance by leveraging LLMs' reasoning capabilities. We propose a Seek-and-Solve pipeline that instructs the LLM to first seek relevant information and then answer the question, integrating these two stages at the reasoning level into a coherent Seek-and-Solve Chain of Thought (SS-CoT). Additionally, we distill a single-step TQA-solving prompt from this pipeline, using demonstrations with SS-CoT paths to guide the LLM in solving complex TQA tasks under In-Context Learning settings. Experiments show that our approaches improve performance and reliability while remaining efficient. These findings emphasize the importance of eliciting LLMs' reasoning capabilities to handle complex TQA tasks effectively.
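The two-stage pipeline described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual prompts: the prompt wording, the `call_llm` stub, and the `seek_and_solve` helper are all hypothetical stand-ins for a real LLM API and the paper's SS-CoT demonstrations.

```python
# Illustrative sketch of a two-stage Seek-and-Solve prompt flow.
# The prompts and the `call_llm` stub are hypothetical; a real system
# would call an LLM API and use the paper's SS-CoT demonstrations.

def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call (hypothetical)."""
    return "[LLM response to: " + prompt[:40] + "...]"

def seek_and_solve(table: str, question: str) -> str:
    # Stage 1 (Seek): ask the model to locate the rows and columns
    # relevant to the question before attempting an answer.
    seek_prompt = (
        "List the rows and columns of the table below that are "
        f"relevant to the question.\nTable:\n{table}\nQuestion: {question}"
    )
    evidence = call_llm(seek_prompt)

    # Stage 2 (Solve): answer using the extracted evidence, so the
    # reasoning produced during simplification feeds the final step.
    solve_prompt = (
        f"Relevant evidence:\n{evidence}\n"
        f"Using this evidence, answer the question: {question}"
    )
    return call_llm(solve_prompt)

answer = seek_and_solve("name | year\nAlice | 2020", "In which year does Alice appear?")
print(answer)
```

The single-step SS-CoT variant would fold both stages into one prompt whose few-shot demonstrations show the seek reasoning followed by the answer, avoiding a second model call.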