There is a growing line of research on verifying the correctness of language models' outputs, and at the same time LMs are increasingly used to tackle complex queries that require reasoning. We introduce CoverBench, a challenging benchmark focused on verifying LM outputs in complex reasoning settings. Datasets that could serve this purpose are usually designed for other complex reasoning tasks (e.g., QA) and target specific use cases (e.g., financial tables), so assembling such a benchmark requires transformations, negative sampling, and the selection of hard examples. CoverBench provides a diversified evaluation of complex claim verification across a variety of domains and types of reasoning, with relatively long inputs and several standardizations, such as multiple representations for tables where available and a consistent schema. We manually vet the data for quality to ensure low levels of label noise. Finally, we report a variety of competitive baseline results showing that CoverBench is challenging and leaves very significant headroom. The data is available at https://huggingface.co/datasets/google/coverbench .
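For readers who want to explore the benchmark, below is a minimal sketch of loading it with the Hugging Face datasets library. The repository id is taken from the URL above; the abstract does not specify split or field names, so the sketch inspects them at runtime rather than assuming a schema.

```python
# Minimal sketch: load CoverBench via the Hugging Face `datasets` library.
# The repository id comes from the paper's URL; splits and field names are
# not specified in the abstract, so we discover them programmatically.
from datasets import load_dataset

ds = load_dataset("google/coverbench")

# List the available splits, then peek at the fields of one example.
print(ds)
split = next(iter(ds))
example = ds[split][0]
print({field: str(value)[:80] for field, value in example.items()})
```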