Neural ranking approaches based on pre-trained language models are highly effective in ad-hoc search. However, the computational expense of these models can limit their application. As such, knowledge distillation is frequently applied to allow a smaller, more efficient model to learn from an effective but expensive one. A key example is the distillation of expensive API-based commercial Large Language Models into smaller production-ready models. However, because the training data and processes of most commercial models are opaque, one cannot ensure that a chosen test collection was not observed during the teacher's training, creating the potential for inadvertent data contamination. We therefore investigate the effect of a contaminated teacher model in a distillation setting, evaluating several distillation techniques to assess the degree to which contamination propagates during distillation. By simulating a ``worst-case'' setting where the degree of contamination is known, we find that contamination occurs even when the test data represents a small fraction of the teacher's training samples. We therefore encourage caution when training with black-box teacher models whose data provenance is ambiguous.
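For readers unfamiliar with the mechanics, the sketch below shows the canonical soft-label distillation objective of Hinton et al. (2015) in PyTorch. It is a generic illustration rather than any of the specific distillation techniques evaluated in this work; the function name, temperature, and loss weighting are assumptions made for the example. Because part of the student's training signal comes directly from the teacher's output distribution, any test data memorised by the teacher can influence the student through this loss.

```python
# Generic soft-label knowledge distillation loss (illustrative sketch only;
# not the specific distillation techniques evaluated in this paper).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,   # assumed value for the example
                      alpha: float = 0.5) -> torch.Tensor:
    """Blend a soft-label term (match the teacher's softened distribution)
    with a hard-label cross-entropy term on the gold labels."""
    # Soften both distributions with the temperature, then compare via KL.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    kd = F.kl_div(soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```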