This paper introduces PRobELM (Plausibility Ranking Evaluation for Language Models), a benchmark designed to assess language models' ability to discern more plausible from less plausible scenarios through their parametric knowledge. While benchmarks such as TruthfulQA emphasise factual accuracy or truthfulness, and others such as COPA explore plausible scenarios without explicitly incorporating world knowledge, PRobELM seeks to bridge this gap by evaluating models' ability to prioritise plausible scenarios that leverage world knowledge over less plausible alternatives. This design allows us to assess the potential of language models for downstream use cases such as literature-based discovery, where the focus is on identifying information that is likely but not yet known. Our benchmark is constructed from a dataset curated from Wikidata edit histories, tailored to align with the temporal bounds of the training data of the evaluated models. PRobELM supports evaluation across multiple prompting types, including statement, text completion, and question answering. Experiments with 10 models of various sizes and architectures, examining the relationship between model scale, training recency, and plausibility performance, reveal that factual accuracy does not directly correlate with plausibility performance and that more up-to-date training data enhances plausibility assessment across different model architectures.