To think critically about arguments, human learners are trained to identify, reconstruct, and evaluate them. Argument reconstruction is especially important because it makes an argument's underlying inferences explicit. However, it remains unclear whether LLMs can similarly enhance their critical thinking ability by learning to reconstruct arguments. To address this question, we introduce a holistic framework with three contributions. We (1) propose an engine that automatically reconstructs arbitrary arguments (GAAR), (2) synthesize a new high-quality argument reconstruction dataset (Arguinas) using the GAAR engine, and (3) investigate whether learning argument reconstruction benefits downstream critical thinking tasks. Our experimental results show that, across seven critical thinking tasks, models trained on argument reconstruction outperform models that are not, with the largest performance gains observed when training on the proposed Arguinas dataset. The source code and dataset will be made publicly available.