Modern enterprise retrieval systems must handle short, underspecified queries such as ``foreign transaction fee refund'' and ``recent check status''. In these cases, semantic nuance and metadata matter, but per-query large language model (LLM) re-ranking and manual labeling are costly. We present Metadata-Aware Cross-Model Alignment (MACA), which distills a calibrated, metadata-aware LLM re-ranker into a compact student retriever, avoiding online LLM calls. A metadata-aware prompt verifies the teacher's trustworthiness by checking consistency under candidate permutations and robustness to paraphrases, then supplies listwise scores, hard negatives, and calibrated relevance margins. The student trains with MACA's MetaFusion objective, which combines a metadata-conditioned ranking loss with a cross-model margin loss, so it learns to push the correct answer above semantically similar candidates with mismatched topic, sub-topic, or entity metadata. On a proprietary consumer-banking FAQ corpus and BankFAQs, the MACA teacher surpasses a MAFA baseline in Accuracy@1 by five points on the proprietary set and three points on BankFAQs. MACA students substantially outperform pretrained encoders; e.g., on the proprietary corpus, MiniLM Accuracy@1 improves from 0.23 to 0.48, while keeping inference free of LLM calls and supporting retrieval-augmented generation.
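The MetaFusion objective is only named above, so the following is a minimal PyTorch sketch of one plausible reading: a listwise distillation term that aligns the student with the teacher's calibrated scores, plus a margin term that penalizes metadata-mismatched negatives more heavily. All names here (metafusion_loss, metadata_match, base_margin, alpha) are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def metafusion_loss(student_scores, teacher_scores, metadata_match,
                    pos_idx=0, base_margin=0.2, alpha=0.5):
    """Hypothetical sketch of a MetaFusion-style loss.

    student_scores / teacher_scores: (batch, n_cands) relevance scores.
    metadata_match: (batch, n_cands) floats in {0, 1}; 1 when a candidate's
        topic, sub-topic, and entity metadata match the query's.
    pos_idx: column of the gold answer among the candidates.
    """
    # Listwise distillation: align the student's ranking distribution
    # with the teacher's calibrated listwise scores (KL divergence).
    rank_loss = F.kl_div(F.log_softmax(student_scores, dim=-1),
                         F.softmax(teacher_scores, dim=-1),
                         reduction="batchmean")
    # Cross-model margin: require the positive to beat each negative by a
    # margin that doubles when the negative's metadata mismatches the query,
    # so semantically similar but off-topic distractors are pushed down harder.
    pos = student_scores[:, pos_idx:pos_idx + 1]        # (batch, 1)
    margins = base_margin * (2.0 - metadata_match)      # mismatch -> 2x margin
    mask = torch.ones_like(student_scores)
    mask[:, pos_idx] = 0.0                              # skip the positive itself
    hinge = F.relu(margins - (pos - student_scores)) * mask
    margin_loss = hinge.sum() / mask.sum()
    return rank_loss + alpha * margin_loss
```

Under this reading, the teacher's calibrated relevance margins could replace the fixed base_margin with per-pair values; the fixed constant is used here only to keep the sketch self-contained.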