Benchmarking is the de facto standard for evaluating LLMs, due to its speed, replicability, and low cost. However, recent work has pointed out that the majority of the open-source benchmarks available today have been contaminated or leaked into LLMs, meaning that LLMs had access to test data during pretraining and/or fine-tuning. This raises serious concerns about the validity of benchmarking studies conducted so far and about the future of evaluation using benchmarks. To solve this problem, we propose Private Benchmarking, a solution where test datasets are kept private and models are evaluated without revealing the test data to the model. We describe various scenarios (depending on the trust placed in model owners or dataset owners) and present solutions to avoid data contamination using private benchmarking. For scenarios where the model weights need to be kept private, we describe solutions from confidential computing and cryptography that can aid in private benchmarking. We build an end-to-end system, TRUCE, that enables such private benchmarking, showing that the overheads introduced to protect the model and the benchmark are negligible (in the case of confidential computing) and tractable (when cryptographic security is required). Finally, we also discuss solutions to the problem of benchmark dataset auditing, to ensure that private benchmarks are of sufficiently high quality.