We introduce a new framework for studying meta-learning methods using PAC-Bayesian theory. Its main advantage over previous work is that it allows greater flexibility in how the transfer of knowledge between tasks is realized. In previous approaches, this transfer could only happen indirectly, by means of learning prior distributions over models. In contrast, the new generalization bounds that we prove express the process of meta-learning much more directly as learning the learning algorithm that should be used for future tasks. The flexibility of our framework makes it suitable for analyzing a wide range of meta-learning mechanisms and even for designing new ones. Beyond our theoretical contributions, we also show empirically that our framework improves prediction quality in practical meta-learning mechanisms.