Despite significant advances in report generation methods, a critical limitation remains: the lack of interpretability in the generated text. This paper introduces an approach to enhance the explainability of text produced by report generation models. Our method employs cyclic text manipulation and visual comparison to identify and elucidate the features in the original content that influence the generated text. By manipulating the generated reports and producing corresponding images, we create a comparative framework that highlights key attributes and their impact on the text generation process. This approach not only identifies the image features aligned with the generated text and improves transparency, but also provides deeper insights into the decision-making mechanisms of report generation models. Our findings demonstrate the potential of this method to significantly enhance the interpretability and transparency of AI-generated reports.