Tables play a crucial role in conveying information across various domains. We propose Plan-then-Reason, a framework for answering diverse user queries over tables with accompanying sentence context. The framework first plans reasoning paths over the context, then assigns each step to program-based or textual reasoning to reach the final answer. This framework enhances the table reasoning abilities of both in-context learning and fine-tuning methods. Following the Plan-then-Reason framework, GPT-3.5-Turbo surpasses other prompting baselines without self-consistency while using fewer API calls and in-context demonstrations. To evaluate the effectiveness of fine-tuning with this framework, we construct an instruction-tuning set, TrixInstruct, and present the ProTrix model family by fine-tuning models on it. Our experiments show that the ProTrix family generalizes to diverse unseen tabular tasks with only 6k training instances. We further demonstrate that ProTrix can generate accurate and faithful explanations to answer complex free-form questions. Our work underscores the importance of planning and reasoning abilities for building generalizable and interpretable models for tabular tasks. We open-source our dataset and models at https://github.com/WilliamZR/ProTrix.