In this survey, we address key challenges in large language model (LLM) research, focusing on the importance of interpretability. Driven by growing interest from both the AI community and industry, we highlight the need for transparency in LLMs. We examine two complementary directions at the intersection of LLM research and eXplainable Artificial Intelligence (XAI): using XAI techniques to enhance model performance, and the emerging focus on interpreting the models themselves. Our paper advocates a balanced approach that values interpretability equally with functional advancements. Recognizing the rapid pace of LLM research, our survey includes both peer-reviewed and preprint (arXiv) papers, offering a comprehensive overview of XAI's role in LLM research. We conclude by urging the research community to advance the LLM and XAI fields together.