Process reward models (PRMs) enhance complex reasoning in large language models (LLMs) by evaluating candidate solutions step-by-step and selecting answers based on aggregated step scores. While effective in domains such as mathematics, their applicability to tasks involving semi-structured data, like table question answering (TQA), remains unexplored. TQA poses unique challenges for PRMs, including abundant irrelevant information, loosely connected reasoning steps, and domain-specific reasoning. This work presents the first systematic study of PRMs for TQA. We evaluate state-of-the-art generative PRMs on TQA from both answer and step perspectives. Results show that PRMs that combine textual and code verification can aid solution selection but struggle to generalize to out-of-domain data. Analysis reveals a weak correlation between performance in step-level verification and answer accuracy, possibly stemming from weak step dependencies and loose causal links. Our findings highlight limitations of current PRMs on TQA and offer valuable insights for building more robust, process-aware verifiers.
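The selection mechanism described above (scoring each reasoning step, aggregating, and picking the best candidate) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the aggregation choices (`min`, `mean`, `prod`) and all function and field names are illustrative assumptions.

```python
from typing import Dict, List


def aggregate_step_scores(step_scores: List[float], method: str = "min") -> float:
    """Collapse per-step PRM scores into one solution-level score.

    Aggregation choices shown here are common conventions, not the
    paper's specific method.
    """
    if method == "min":   # a chain is only as strong as its weakest step
        return min(step_scores)
    if method == "mean":  # average step quality
        return sum(step_scores) / len(step_scores)
    if method == "prod":  # treat scores as independent step probabilities
        prod = 1.0
        for s in step_scores:
            prod *= s
        return prod
    raise ValueError(f"unknown aggregation method: {method}")


def select_answer(candidates: List[Dict], method: str = "min") -> Dict:
    """Best-of-N selection: return the candidate whose aggregated
    step score is highest."""
    return max(
        candidates,
        key=lambda c: aggregate_step_scores(c["step_scores"], method),
    )


# Two hypothetical candidate solutions with PRM scores per reasoning step.
candidates = [
    {"answer": "42", "step_scores": [0.9, 0.8, 0.3]},  # one weak step
    {"answer": "17", "step_scores": [0.7, 0.7, 0.7]},  # uniformly plausible
]
best = select_answer(candidates)  # min-aggregation penalizes the weak step
```

Under `min` aggregation the second candidate wins (0.7 > 0.3) despite the first having higher individual step scores, which is exactly why aggregation choice matters for answer-level accuracy.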