Retrieval-Augmented Generation systems depend on retrieving semantically relevant document chunks to support accurate, grounded outputs from large language models. In structured and repetitive corpora such as regulatory filings, chunk similarity alone often fails to distinguish between documents with overlapping language. Practitioners often flatten metadata into input text as a heuristic, but the impact and trade-offs of this practice remain poorly understood. We present a systematic study of metadata-aware retrieval strategies, comparing plain-text baselines with approaches that embed metadata directly. Our evaluation spans metadata-as-text (prefix and suffix), a dual-encoder unified embedding that fuses metadata and content in a single index, dual-encoder late-fusion retrieval, and metadata-aware query reformulation. Across multiple retrieval metrics and question types, we find that prefixing and unified embeddings consistently outperform plain-text baselines, with the unified embedding at times exceeding prefixing while being easier to maintain. Beyond empirical comparisons, we analyze the embedding space, showing that metadata integration improves effectiveness by increasing intra-document cohesion, reducing inter-document confusion, and widening the separation between relevant and irrelevant chunks. Field-level ablations show that structural cues provide strong disambiguating signals. Our code, evaluation framework, and the RAGMATE-10K dataset are publicly available.