Graph condensation (GC) is an emerging technique that learns a significantly smaller graph retaining the essential information of the original graph. The condensed graph has shown promise in accelerating graph neural networks while achieving performance comparable to that obtained with the original, larger graph. Additionally, this technique facilitates downstream applications such as neural architecture search and deepens our understanding of redundancy in large graphs. Despite the rapid development of GC methods, a systematic evaluation framework remains absent, yet one is needed to clarify which designs are critical for particular evaluative aspects. Furthermore, several meaningful questions have not been investigated, such as whether GC inherently preserves certain graph properties and offers robustness even without targeted design efforts. In this paper, we introduce GC-Bench, a comprehensive framework for evaluating recent GC methods across multiple dimensions and generating new insights. Our experimental findings provide deeper insight into the GC process and the characteristics of condensed graphs, guiding future efforts to enhance performance and explore new applications. Our code is available at \url{https://github.com/Emory-Melody/GraphSlim/tree/main/benchmark}.