Few-shot learning requires models to generalize under limited supervision while remaining robust to distribution shifts. Existing Sinkhorn Distributionally Robust Optimization (DRO) methods provide theoretical guarantees but rely on a fixed reference distribution, which limits their adaptability. We propose a Prototype-Guided Distributionally Robust Optimization (PG-DRO) framework that learns class-adaptive priors from abundant base data via hierarchical optimal transport and embeds them into the Sinkhorn DRO formulation. This design organically integrates few-shot information into class-specific robust decision making that is both theoretically grounded and computationally efficient, and further aligns the uncertainty set with transferable structural knowledge. Experiments show that PG-DRO achieves stronger robust generalization in few-shot scenarios, outperforming both standard learners and DRO baselines.
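To make the optimal-transport machinery concrete, the following is a minimal sketch of the entropic (Sinkhorn) transport step that underlies Sinkhorn DRO formulations. The dimensions, the `support`/`prototypes` names, and the toy data are illustrative assumptions, not the paper's actual PG-DRO implementation.

```python
import numpy as np

def sinkhorn_plan(a, b, C, eps=0.1, n_iter=200):
    """Entropic OT: approximate the plan P minimizing <P, C> - eps * H(P)
    subject to row marginal a and column marginal b, via Sinkhorn scaling."""
    K = np.exp(-C / eps)             # Gibbs kernel derived from the cost matrix
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)            # rescale columns toward target marginal b
        u = a / (K @ v)              # rescale rows toward source marginal a
    return u[:, None] * K * v[None, :]

# Toy example: couple few-shot support embeddings with class prototypes.
rng = np.random.default_rng(0)
support = rng.normal(size=(5, 2))       # 5 hypothetical support embeddings
prototypes = rng.normal(size=(3, 2))    # 3 hypothetical class prototypes
C = ((support[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)  # squared Euclidean cost
a = np.full(5, 1 / 5)                   # uniform weight over support samples
b = np.full(3, 1 / 3)                   # uniform weight over prototypes
P = sinkhorn_plan(a, b, C)              # soft assignment of samples to prototypes
```

The resulting plan `P` softly assigns each support sample to the prototypes; its entropic regularization (controlled by `eps`) is what makes the Sinkhorn uncertainty set tractable compared to exact optimal transport.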