User authentication and fraud detection face growing challenges as digital systems expand and adversaries adopt increasingly sophisticated tactics. Traditional knowledge-based authentication remains rigid, requiring exact word-for-word string matches that fail to accommodate natural human memory and linguistic variation. Meanwhile, fraud-detection pipelines struggle to keep pace with rapidly evolving scam behaviors, leading to high false-positive rates and frequent retraining cycles. This work introduces two complementary LLM-enabled solutions: (i) an LLM-assisted authentication mechanism that evaluates semantic correctness rather than exact wording, supported by document segmentation and a hybrid scoring method that combines LLM judgment with cosine-similarity metrics; and (ii) a RAG-based fraud-detection pipeline that grounds LLM reasoning in curated evidence to reduce hallucinations and adapt to emerging scam patterns without model retraining. Experiments show that the authentication system accepts 99.5% of legitimate non-exact answers while maintaining a 0.1% false-acceptance rate, and that RAG-enhanced fraud detection reduces false positives from 17.2% to 3.5%. Together, these findings demonstrate that LLMs can significantly improve both usability and robustness in security workflows, offering a more adaptive, explainable, and human-aligned approach to authentication and fraud detection.
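The hybrid scoring idea mentioned above can be illustrated with a minimal sketch. This is an assumption about the general shape of such a scorer, not the paper's actual implementation: it blends an embedding-based cosine similarity with a binary LLM verdict (here passed in as a boolean, standing in for a real LLM judge call), and accepts the answer when the combined score clears a threshold. The function names, the 0.5 weight, and the 0.8 threshold are all illustrative choices.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def hybrid_score(answer_vec, reference_vec, llm_verdict, sim_weight=0.5):
    """Blend embedding similarity with a binary LLM judgment.

    llm_verdict is a boolean standing in for an external LLM judge
    ("is this answer semantically correct?"); in a real system it
    would come from a model call.
    """
    sim = cosine_similarity(answer_vec, reference_vec)
    return sim_weight * sim + (1 - sim_weight) * (1.0 if llm_verdict else 0.0)

# Toy example: a paraphrased answer whose embedding is close to the reference.
reference = [0.2, 0.8, 0.1]
answer = [0.25, 0.75, 0.05]
score = hybrid_score(answer, reference, llm_verdict=True)
accepted = score >= 0.8  # illustrative acceptance threshold
```

Blending the two signals lets the embedding similarity catch near-paraphrases cheaply while the LLM verdict guards against superficially similar but semantically wrong answers.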