Graph representation learning, a critical step in graph-centric tasks, has seen significant advancements. Earlier techniques typically operate in an end-to-end setting, where performance relies heavily on the availability of ample labeled data. This constraint has spurred the emergence of few-shot learning on graphs, where only a few task-specific labels are available for each task. Given the extensive literature in this field, this survey synthesizes recent developments, provides comparative insights, and identifies future directions. We systematically categorize existing studies into three major families: meta-learning approaches, pre-training approaches, and hybrid approaches, with a finer-grained classification within each family to aid readers in selecting a suitable method. Within each category, we analyze the relationships among the methods and compare their strengths and limitations. Finally, we outline prospective future directions for few-shot learning on graphs to catalyze continued innovation in this field.