Chart summarization, which focuses on extracting key information from charts and interpreting it in natural language, is crucial for generating and delivering insights through effective and accessible data analysis. Traditional methods for chart understanding and summarization often rely on multi-stage pipelines, which may produce suboptimal semantic alignment between visual and textual information. In contrast, recently developed LLM-based methods depend heavily on the capabilities of foundation vision or language models while overlooking the characteristics of chart data and its associated challenges. To address these limitations, we propose ChartAdapter, a novel lightweight transformer module designed to bridge the gap between charts and textual summaries. ChartAdapter employs learnable query vectors to extract implicit semantics from chart data and incorporates a cross-modal alignment projector to enhance vision-to-language generative learning. By integrating ChartAdapter with an LLM, we enable end-to-end training and efficient chart summarization. To further enhance training, we introduce a three-stage hierarchical training procedure and develop a large-scale dataset specifically curated for chart summarization, comprising 190,618 samples. Experimental results on the standard Chart-to-Text test set demonstrate that our approach significantly outperforms existing methods, including state-of-the-art models, in generating high-quality chart summaries. Ablation studies further validate the effectiveness of key components in ChartAdapter. This work highlights the potential of tailored LLM-based approaches to advance chart understanding and sets a strong foundation for future research in this area.
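The adapter architecture outlined above (learnable queries extracting chart semantics, followed by a cross-modal projector into the LLM space) might be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: all dimensions, layer choices, and the assumption of a frozen patch-level vision encoder are hypothetical.

```python
import torch
import torch.nn as nn

class ChartAdapterSketch(nn.Module):
    """Hypothetical sketch of a lightweight chart-to-LLM adapter:
    learnable query vectors cross-attend to chart features, and a
    projector maps the result into the LLM embedding space."""

    def __init__(self, vision_dim=768, llm_dim=1024, num_queries=32, num_heads=8):
        super().__init__()
        # Learnable query vectors that pull implicit semantics from chart features
        self.queries = nn.Parameter(torch.randn(num_queries, vision_dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(vision_dim, num_heads, batch_first=True)
        # Cross-modal alignment projector into the LLM's token-embedding space
        self.projector = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, chart_feats):
        # chart_feats: (batch, num_patches, vision_dim), e.g. from a
        # frozen vision encoder applied to the chart image (assumed here)
        batch = chart_feats.size(0)
        q = self.queries.unsqueeze(0).expand(batch, -1, -1)
        attended, _ = self.cross_attn(q, chart_feats, chart_feats)
        # Output: (batch, num_queries, llm_dim), prependable to LLM input embeddings
        return self.projector(attended)
```

In such a design, the projected query outputs would be concatenated with the text-token embeddings before the LLM, allowing the whole pipeline to be trained end-to-end with the usual language-modeling loss.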