We introduce TableLLM, a robust large language model (LLM) with 8 billion parameters, purpose-built for handling tabular data manipulation tasks, whether the tables are embedded in documents or spreadsheets, catering to real-world office scenarios. We propose a distant supervision method for training that comprises a reasoning-process extension strategy, which helps the LLM learn reasoning patterns more effectively, and a cross-way validation strategy, which ensures the quality of the automatically generated training data. To evaluate TableLLM, we craft benchmarks tailored to both document and spreadsheet formats and construct a well-organized evaluation pipeline capable of handling both scenarios. Thorough evaluations underscore the advantages of TableLLM over a variety of existing general-purpose and tabular-data-focused LLMs. The model checkpoint, source code, benchmarks, and a web application for user interaction are publicly available at https://github.com/TableLLM/TableLLM.