Recent advances in Large Language Models (LLMs) have significantly improved table understanding tasks such as Table Question Answering (TableQA), yet challenges remain in ensuring reliability, scalability, and efficiency, especially in resource-constrained or privacy-sensitive environments. In this paper, we introduce MATA, a multi-agent TableQA framework that leverages multiple complementary reasoning paths and a set of tools built with small language models. MATA generates candidate answers through diverse reasoning styles for a given table and question, then refines or selects the optimal answer with the help of these tools. Furthermore, it incorporates an algorithm designed to minimize expensive LLM agent calls, enhancing overall efficiency. MATA maintains strong performance with small, open-source models and adapts easily across various LLM types. Extensive experiments on two benchmarks of varying difficulty with ten different LLMs demonstrate that MATA achieves state-of-the-art accuracy and highly efficient reasoning while avoiding excessive LLM inference. Our results highlight that careful orchestration of multiple reasoning pathways yields scalable and reliable TableQA. The code is available at https://github.com/AIDAS-Lab/MATA.