I combine detection and mitigation techniques to address hallucinations in Large Language Models (LLMs). Mitigation is achieved within a question-answering Retrieval-Augmented Generation (RAG) framework, while detection relies on the Negative Missing Information Scoring System (NMISS), which I introduce to account for contextual relevance in responses. While RAG mitigates hallucinations by grounding answers in external data, NMISS refines the evaluation by identifying cases where traditional metrics incorrectly flag contextually accurate responses as hallucinations. I use Italian health news articles as context to evaluate LLM performance. Results show that Gemma2 and GPT-4 outperform the other models, with GPT-4 producing answers that align closely with the reference responses. Mid-tier models such as Llama2, Llama3, and Mistral benefit significantly from NMISS, highlighting their ability to provide richer contextual information. This combined approach offers new insights into reducing, and more accurately assessing, hallucinations in LLMs, with applications to real-world healthcare tasks and other domains.
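As a minimal illustration of the mitigation side, the sketch below implements a generic retrieve-then-generate loop: candidate articles are ranked by TF-IDF cosine similarity to the question and the top match is injected into the prompt. The placeholder article list, the `llm_generate` callable, and the prompt wording are my own assumptions for illustration; the abstract does not specify the retriever or prompting used in the study.

```python
# Minimal RAG sketch. The corpus, prompt, and `llm_generate` placeholder
# are illustrative assumptions, not the study's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "Articolo 1: testo di una notizia sanitaria...",   # placeholder Italian health news
    "Articolo 2: testo di un'altra notizia...",
]

def retrieve(question: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank corpus passages by TF-IDF cosine similarity to the question."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(corpus + [question])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    top = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top]

def answer(question: str, llm_generate) -> str:
    """Ground the generation in retrieved context to mitigate hallucination."""
    context = "\n".join(retrieve(question, articles))
    prompt = (
        "Answer using ONLY the context below. If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm_generate(prompt)
```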
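The exact NMISS formula is not given in this abstract, so the following is only a hypothetical sketch of the stated idea: answer content that is absent from the reference but grounded in the retrieved context should be credited rather than flagged as hallucination. The token-set approximation and the 0.5 credit weight are illustrative assumptions, not the paper's definition.

```python
# Hypothetical NMISS-style scorer. NOT the paper's formula: only a sketch of
# the idea that context-grounded extra content should not count as hallucination.

def token_set(text: str) -> set[str]:
    return set(text.lower().split())

def naive_overlap_score(answer: str, reference: str) -> float:
    """Traditional-style metric: fraction of reference tokens the answer covers."""
    ref = token_set(reference)
    return len(ref & token_set(answer)) / len(ref) if ref else 0.0

def nmiss_style_score(answer: str, reference: str, context: str) -> float:
    """Sketch: credit answer tokens missing from the reference but present
    in the source context, instead of penalizing them as hallucinated."""
    ans, ref, ctx = token_set(answer), token_set(reference), token_set(context)
    covered = ans & ref                 # matches the reference
    grounded_extra = (ans - ref) & ctx  # absent from reference, supported by context
    hallucinated = ans - ref - ctx      # supported by neither
    denom = len(covered | grounded_extra | hallucinated)
    # 0.5 is an arbitrary illustrative weight for context-grounded extras.
    return (len(covered) + 0.5 * len(grounded_extra)) / denom if denom else 0.0
```

Under a scorer of this kind, a model whose answers add context-grounded detail scores higher than a pure reference-overlap metric would suggest, which is consistent with the gains the abstract reports for Llama2, Llama3, and Mistral.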