2D Matryoshka Training is an advanced embedding representation training approach designed to train an encoder model simultaneously across various layer-dimension setups. This method has demonstrated higher effectiveness on Semantic Textual Similarity (STS) tasks than traditional training approaches when sub-layers are used for embeddings. Despite its success, discrepancies exist between two published implementations, leading to varied comparative results against baseline models. In this reproducibility study, we implement and evaluate both versions of 2D Matryoshka Training on STS tasks and extend our analysis to retrieval tasks. Our findings indicate that while both versions achieve higher effectiveness than both traditional Matryoshka training on sub-dimensions and traditional full-sized model training, they do not outperform models trained separately on specific sub-layer and sub-dimension setups. Moreover, these results generalize well to retrieval tasks, in both supervised (MSMARCO) and zero-shot (BEIR) settings. Further exploration of different loss computations reveals implementations better suited to retrieval tasks, such as incorporating full-dimension loss and training on a broader range of target dimensions. Conversely, some intuitive approaches, such as fixing document encoders to full model outputs, do not yield improvements. Our reproduction code is available at https://github.com/ielab/2DMSE-Reproduce.
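To make the core idea concrete, the following is a minimal sketch of a 2D Matryoshka-style objective: a single loss summed over every (sub-layer, sub-dimension) combination of the encoder's pooled embeddings. The function names, the MSE-to-label formulation, and the dimension set are illustrative assumptions; the two published implementations differ in exactly which configurations they include and how the per-configuration losses are weighted.

```python
import numpy as np

def truncated_cosine(u, v, dim):
    # Matryoshka-style sub-dimension embedding: keep only the first
    # `dim` components before computing cosine similarity.
    u, v = u[:dim], v[:dim]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def two_d_matryoshka_loss(layer_embs_a, layer_embs_b, label,
                          dims=(64, 128, 256)):
    """Hypothetical 2D Matryoshka objective (illustrative only).

    layer_embs_a / layer_embs_b: per-layer pooled embeddings for the
    two texts of an STS pair (one array per encoder sub-layer).
    label: gold similarity score scaled to [0, 1].
    Sums a squared error between predicted and gold similarity over
    every (sub-layer, sub-dimension) pair -- the "2D" grid.
    """
    total = 0.0
    for emb_a, emb_b in zip(layer_embs_a, layer_embs_b):
        for d in dims:
            sim = truncated_cosine(emb_a, emb_b, d)
            total += (sim - label) ** 2
    return total
```

Because every sub-layer and sub-dimension contributes to the same objective, a single trained model can later be truncated along either axis at inference time, which is the property the reproduced papers evaluate.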