We are currently in an era of fierce competition among large language models (LLMs), which continuously push the boundaries of benchmark performance. However, genuinely assessing the capabilities of these LLMs has become a challenging and critical issue due to potential data contamination, and researchers and engineers waste considerable time and effort downloading and testing contaminated models. To address this, we propose Clean-Eval, a novel and practical method that mitigates data contamination and evaluates LLMs in a cleaner manner. Clean-Eval employs an LLM to paraphrase and back-translate the contaminated data into a candidate set, generating expressions with the same meaning but different surface forms. A semantic detector then filters out low-quality generations to narrow down this candidate set, and the best candidate is finally selected from it based on the BLEURT score. According to human assessment, this best candidate is semantically similar to the original contaminated data but expressed differently. Together, the candidates form a new benchmark for evaluating the model. Our experiments show that Clean-Eval substantially restores the true evaluation results of contaminated LLMs under both few-shot learning and fine-tuning scenarios.
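As a rough sketch of the pipeline described above, the following Python outlines the three steps: candidate generation via paraphrasing and back-translation, semantic filtering, and BLEURT-based selection. All four helper callables (llm_paraphrase, back_translate, semantic_ok, bleurt_score) are hypothetical stand-ins for components the abstract does not specify, not actual Clean-Eval code.

```python
# A minimal sketch of the Clean-Eval pipeline, under stated assumptions.
# llm_paraphrase, back_translate, semantic_ok, and bleurt_score are
# hypothetical placeholders, not part of any released Clean-Eval code.
from typing import Callable, List

def clean_eval_rewrite(
    sample: str,
    llm_paraphrase: Callable[[str], List[str]],   # LLM-based paraphrasing
    back_translate: Callable[[str], str],         # e.g. en -> zh -> en round trip
    semantic_ok: Callable[[str, str], bool],      # semantic-equivalence detector
    bleurt_score: Callable[[str, str], float],    # BLEURT(reference, candidate)
) -> str:
    """Return a surface-form rewrite of a (possibly contaminated) sample."""
    # Step 1: build the candidate set via paraphrasing and back-translation.
    candidates = llm_paraphrase(sample) + [back_translate(sample)]
    # Step 2: drop low-quality candidates that drift from the original meaning.
    candidates = [c for c in candidates if semantic_ok(sample, c)]
    if not candidates:
        return sample  # fall back to the original if nothing survives the filter
    # Step 3: keep the candidate closest to the original by BLEURT score.
    return max(candidates, key=lambda c: bleurt_score(sample, c))
```

Passing the components in as callables keeps the orchestration logic separate from any particular LLM, translation system, or scorer; applying this function over every benchmark item would yield the new, decontaminated evaluation set.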