Process reward models (PRMs) improve complex reasoning in large language models (LLMs) by grading candidate solutions step-by-step and selecting answers via aggregated step scores. While effective in domains such as mathematics, their applicability to tasks involving semi-structured data, such as table question answering (TQA), remains unexplored. TQA poses unique challenges for PRMs, including abundant irrelevant information, loosely connected reasoning steps, and domain-specific reasoning. This work presents the first systematic study of PRMs for TQA. We evaluate state-of-the-art generative PRMs on TQA from both answer and step perspectives. Results show that PRMs combining textual and code verification can aid solution selection but struggle to generalize to out-of-domain data. Analysis reveals a weak correlation between step-level verification performance and answer accuracy, possibly stemming from weak step dependencies and loose causal links. Our findings highlight the limitations of current PRMs on TQA and offer valuable insights for building more robust, process-aware verifiers.
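To make the selection mechanism concrete, the sketch below illustrates how per-step PRM scores can be aggregated into a solution-level score for best-of-N answer selection. It is a minimal sketch under stated assumptions: the class and function names (CandidateSolution, aggregate, select_answer) and the min/mean aggregation choices are illustrative, not the exact procedure used in this work.

```python
# Illustrative sketch (hypothetical names): selecting an answer by
# aggregating per-step PRM scores over N sampled candidate solutions.
from dataclasses import dataclass
from typing import List


@dataclass
class CandidateSolution:
    answer: str                # final answer extracted from the solution
    step_scores: List[float]   # PRM score for each reasoning step, in [0, 1]


def aggregate(step_scores: List[float], method: str = "min") -> float:
    """Collapse step-level scores into a single solution-level score."""
    if method == "min":        # a single bad step sinks the whole solution
        return min(step_scores)
    if method == "mean":       # average step quality across the solution
        return sum(step_scores) / len(step_scores)
    raise ValueError(f"unknown aggregation method: {method}")


def select_answer(candidates: List[CandidateSolution], method: str = "min") -> str:
    """Best-of-N selection: return the answer of the highest-scoring candidate."""
    best = max(candidates, key=lambda c: aggregate(c.step_scores, method))
    return best.answer
```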