Retrieval-Augmented Generation (RAG) systems are increasingly deployed in high-stakes domains, where safety depends not only on how a system answers, but also on whether a query should be answered given a knowledge base (KB). Out-of-domain (OOD) queries can cause dense retrieval to surface weakly related context and lead the generator to produce fluent but unjustified responses. We study lightweight, KB-aligned OOD detection as an always-on gate for RAG systems. Our approach applies PCA to KB embeddings and scores queries in a compact subspace selected either by explained-variance retention (EVR) or by a separability-driven t-test ranking. We evaluate geometric semantic-search rules and lightweight classifiers across 16 domains, including high-stakes COVID-19 and Substance Use KBs, and stress-test robustness using both LLM-generated attacks and an in-the-wild 4chan attack. We find that low-dimensional detectors achieve competitive OOD performance while being faster, cheaper, and more interpretable than prompted LLM-based judges. Finally, human and LLM-based evaluations show that OOD queries primarily degrade the relevance of RAG outputs, underscoring the need for efficient external OOD detection to maintain safe, in-scope behavior.
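The PCA/EVR gating idea described above can be sketched in a few lines: fit principal components on the KB embeddings, keep the smallest subspace whose explained-variance ratio exceeds a retention threshold, and score an incoming query by its residual outside that subspace. This is a minimal illustrative sketch, not the paper's exact detector; the function names, the 0.95 EVR threshold, and the residual-norm score are assumptions.

```python
import numpy as np

def fit_pca_evr(kb_embeddings, evr=0.95):
    """Fit PCA on KB embeddings; keep components up to the EVR threshold."""
    mean = kb_embeddings.mean(axis=0)
    X = kb_embeddings - mean
    # SVD of the centered KB matrix gives principal directions in Vt
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    var = S ** 2 / (len(X) - 1)
    ratio = np.cumsum(var) / var.sum()
    k = int(np.searchsorted(ratio, evr)) + 1  # smallest k reaching the EVR target
    return mean, Vt[:k]

def ood_score(query_emb, mean, components):
    """Residual norm outside the KB subspace; larger => more likely OOD."""
    z = query_emb - mean
    proj = components.T @ (components @ z)  # projection onto the KB subspace
    return float(np.linalg.norm(z - proj))
```

A query whose embedding lies near the span of the KB's dominant components gets a low score and passes the gate; a query pointing off that subspace gets a high residual and can be rejected or routed to a refusal response. A threshold on the score would in practice be calibrated on held-out in-domain queries.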