Graph representation learning, a critical step in graph-centric tasks, has seen significant advancements. Earlier techniques often operated in an end-to-end setting, where performance relied heavily on the availability of ample labeled data. This constraint has spurred the emergence of few-shot learning on graphs, where only a handful of task-specific labels are available for each task. Given the extensive literature in this field, this survey endeavors to synthesize recent developments, provide comparative insights, and identify future directions. We systematically categorize existing studies into three major families: meta-learning approaches, pre-training approaches, and hybrid approaches, with a finer-grained classification within each family to aid readers in their method selection. Within each category, we analyze the relationships among the methods and compare their strengths and limitations. Finally, we outline prospective future directions for few-shot learning on graphs to catalyze continued innovation in this field.