User authentication and fraud detection face growing challenges as digital systems expand and adversaries adopt increasingly sophisticated tactics. Traditional knowledge-based authentication remains rigid, requiring exact word-for-word string matches that fail to accommodate natural human memory and linguistic variation. Meanwhile, fraud-detection pipelines struggle to keep pace with rapidly evolving scam behaviors, leading to high false-positive rates and frequent retraining cycles. This work introduces two complementary LLM-enabled solutions: (i) an LLM-assisted authentication mechanism that evaluates semantic correctness rather than exact wording, supported by document segmentation and a hybrid scoring method that combines LLM judgement with cosine-similarity metrics; and (ii) a RAG-based fraud-detection pipeline that grounds LLM reasoning in curated evidence to reduce hallucinations and adapt to emerging scam patterns without model retraining. Experiments show that the authentication system accepts 99.5% of legitimate non-exact answers while maintaining a 0.1% false-acceptance rate, and that the RAG-enhanced fraud detection reduces false positives from 17.2% to 3.5%. Together, these findings demonstrate that LLMs can significantly improve both usability and robustness in security workflows, offering a more adaptive, explainable, and human-aligned approach to authentication and fraud detection.
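The hybrid scoring idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the bag-of-words cosine similarity stands in for a real embedding model, and the weighting parameter `alpha` and acceptance `threshold` are hypothetical values chosen for the example.

```python
import math
from collections import Counter


def cosine_similarity(a: str, b: str) -> float:
    """Toy bag-of-words cosine similarity (stand-in for an embedding model)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0


def hybrid_score(llm_score: float, answer: str, reference: str,
                 alpha: float = 0.6) -> float:
    """Blend an LLM judgement score (0..1) with cosine similarity.

    alpha weights the LLM judgement against the lexical signal;
    both alpha and the scoring scale are illustrative assumptions.
    """
    return alpha * llm_score + (1 - alpha) * cosine_similarity(answer, reference)


def accept(llm_score: float, answer: str, reference: str,
           threshold: float = 0.7) -> bool:
    """Accept the answer when the combined score clears a threshold."""
    return hybrid_score(llm_score, answer, reference) >= threshold
```

A paraphrased answer such as "rex was my first dog" against the stored reference "my first dog was rex" would fail an exact string match, but scores highly under both components and is accepted by the hybrid rule.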