Question answering (QA) over tables and text has gained much popularity in recent years. Multi-hop table-text QA requires multiple hops between the table and text, making it a challenging QA task. Although several works have attempted to solve the table-text QA task, most involve training models and require labeled data. In this paper, we propose TTQA-RS: a break-down prompting approach for multi-hop table-text question answering with reasoning and summarization. Our model uses augmented knowledge, including a table-text summary and decomposed sub-questions with their answers, for reasoning-based table-text QA. Using open-source language models, our model outperforms all existing prompting methods for table-text QA on existing table-text QA datasets such as HybridQA and OTT-QA's development set. Our results are comparable with those of training-based state-of-the-art models, demonstrating the potential of prompt-based approaches using open-source LLMs. Additionally, by using GPT-4 with LLaMA3-70B, our model achieves state-of-the-art performance among prompting-based methods on multi-hop table-text QA.