Model interpretability is crucial for understanding and trusting the decisions made by complex machine learning models, such as those built with XGBoost. SHAP (SHapley Additive exPlanations) values have become a popular tool for interpreting these models by attributing the output to individual features. However, the technical nature of SHAP explanations often restricts their usefulness to researchers and data scientists, leaving non-technical end users struggling to understand the model's behavior. To address this challenge, we explore the use of Large Language Models (LLMs) to translate SHAP value outputs into plain-language explanations that are accessible to non-technical audiences. By applying a pre-trained LLM, we generate explanations that preserve the fidelity of the underlying SHAP values while substantially improving their clarity and usability for end users. Our results demonstrate that LLM-enhanced SHAP explanations provide a more intuitive understanding of model predictions, thereby enhancing the overall interpretability of machine learning models. Future work will explore further customization, multimodal explanations, and user feedback mechanisms to refine and expand the approach.
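As a concrete illustration of the pipeline the abstract describes, the sketch below computes SHAP values for a single XGBoost prediction and prompts an LLM to restate the top attributions in plain language. This is a minimal sketch rather than the paper's implementation: the dataset, the prompt wording, and the gpt-4o-mini model choice are illustrative assumptions, and any comparable chat-completion endpoint could be substituted.

```python
import shap
import xgboost as xgb
from openai import OpenAI
from sklearn.datasets import load_breast_cancer

# Train an XGBoost classifier on a standard tabular dataset.
data = load_breast_cancer()
model = xgb.XGBClassifier(n_estimators=50, max_depth=3, eval_metric="logloss")
model.fit(data.data, data.target)

# Attribute one prediction to its input features with SHAP.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:1])

# Keep the five largest attributions and render them as plain text.
top = sorted(zip(data.feature_names, shap_values[0]),
             key=lambda p: abs(p[1]), reverse=True)[:5]
feature_summary = "\n".join(f"- {name}: {value:+.4f}" for name, value in top)

# Ask the LLM to restate the attributions for a non-technical reader.
prompt = (
    "A machine learning model made a prediction. The most influential "
    "features and their SHAP values are listed below; positive values "
    "pushed the prediction higher, negative values pushed it lower. "
    "Explain this in plain language for a non-technical reader, without "
    "jargon:\n" + feature_summary
)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; the paper does not name a model
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Sending only the top-ranked attributions keeps the prompt short and steers the LLM toward the features that actually drove the prediction, which helps the generated explanation stay faithful to the underlying SHAP values.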