Explainable artificial intelligence (XAI) enables data-driven understanding of how input factors are associated with response variables, yet communicating XAI outputs to laypersons remains challenging, hindering trust in AI-based predictions. Large language models (LLMs) have emerged as promising tools for translating technical explanations into accessible narratives; however, the integration of agentic AI, in which LLMs operate as autonomous agents through iterative refinement, with XAI remains unexplored. This study proposes an agentic XAI framework that combines SHAP-based explainability with multimodal LLM-driven iterative refinement to generate progressively enhanced explanations. As a use case, we tested the framework as an agricultural recommendation system using rice yield data from 26 fields in Japan. The agentic XAI system first produced a SHAP-based explanation and then iteratively explored how to improve it through additional analyses across 11 refinement rounds (Rounds 0-10). Explanations were evaluated by human experts (crop scientists, n=12) and LLMs (n=14) against seven metrics: Specificity, Clarity, Conciseness, Practicality, Contextual Relevance, Cost Consideration, and Crop Science Credibility. Both evaluator groups confirmed that the framework enhanced recommendation quality, with average scores increasing by 30-33% from Round 0 and peaking at Rounds 3-4. However, excessive refinement led to a substantial drop in quality. Metric-specific analysis revealed a bias-variance trade-off: early rounds lacked explanation depth (bias), whereas excessive iteration introduced verbosity and ungrounded abstraction (variance). These findings suggest that strategic early stopping (regularization) is needed to optimize practical utility, challenging the assumption of monotonic improvement and providing evidence-based design principles for agentic XAI systems.
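To make the refinement-with-early-stopping idea concrete, below is a minimal, runnable Python sketch of the loop the abstract describes. It is not the authors' implementation: the helper names `explain_with_shap`, `llm_refine`, and `evaluate` are hypothetical stand-ins for (1) a SHAP attribution step, (2) a multimodal LLM call that critiques and rewrites the previous explanation, and (3) the seven-metric rubric averaged to a single score; the toy `evaluate` merely mimics the reported rise-then-fall quality pattern.

```python
def explain_with_shap(model, X):
    # In practice: shap.Explainer(model)(X) -> feature attributions,
    # summarized as text. Stubbed so the sketch runs without a model.
    return "SHAP summary: nitrogen input and sunshine hours dominate yield."

def llm_refine(explanation, round_idx):
    # In practice: a prompt asking a multimodal LLM to improve the previous
    # explanation, possibly after running additional analyses.
    return explanation + f" [refined in round {round_idx}]"

def evaluate(explanation):
    # In practice: mean of the seven rubric metrics (Specificity, Clarity,
    # Conciseness, Practicality, Contextual Relevance, Cost Consideration,
    # Crop Science Credibility). This toy version rises, then falls, to
    # mimic the reported peak at Rounds 3-4 followed by degradation.
    n = explanation.count("refined")
    return 0.5 + 0.1 * n - 0.02 * n ** 2

def agentic_xai(model, X, max_rounds=10, patience=2):
    """Run Rounds 0..max_rounds, keep the best-scoring explanation, and
    stop early (regularization) after `patience` rounds without improvement."""
    explanation = explain_with_shap(model, X)  # Round 0
    best, best_score, stall = explanation, evaluate(explanation), 0
    for r in range(1, max_rounds + 1):
        explanation = llm_refine(explanation, r)
        score = evaluate(explanation)
        if score > best_score:
            best, best_score, stall = explanation, score, 0
        else:
            stall += 1
            if stall >= patience:  # quality has stopped improving
                break
    return best, best_score

if __name__ == "__main__":
    best_expl, score = agentic_xai(model=None, X=None)
    print(f"best score {score:.2f}: {best_expl}")
```

Under these toy dynamics the loop halts well before `max_rounds`, returning the Round 2-3 explanation, which illustrates why strategic early stopping, rather than exhaustive iteration, is the design principle the study's results support.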