Weak supervision searches in principle combine two advantages: they can be trained on experimental data and they can learn distinctive signal properties. In practice, however, their applicability is limited because successfully training a neural network via weak supervision can require a large amount of signal. In this work, we seek to create neural networks that can learn from less experimental signal by using transfer and meta-learning. The general idea is to first train a neural network on simulations, so that it either learns reusable concepts or becomes a more efficient learner. The network is then trained on experimental data and should require less signal because of this prior training. We find that transfer and meta-learning can substantially improve the performance of weak supervision searches.
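A minimal sketch of the transfer-learning workflow described above, under the common assumption that the weak supervision step is CWoLa-style training on mixed samples labeled only by region (signal-enriched vs. signal-depleted). The network architecture, feature dimension, hyperparameters, and synthetic placeholder data are illustrative assumptions, not the configuration used in this work.

```python
# Sketch: supervised pretraining on simulation, then weakly supervised
# fine-tuning on "experimental" mixed samples. All numbers are placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_net(n_features=4):
    # Small fully connected classifier; the architecture is an assumption.
    return nn.Sequential(
        nn.Linear(n_features, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 1),
    )

def train(net, x, y, epochs=20, lr=1e-3):
    # Standard binary cross-entropy training loop.
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(net(x).squeeze(-1), y)
        loss.backward()
        opt.step()
    return net

# Step 1: supervised pretraining on simulated signal vs. background
# (stand-in random features and truth labels for illustration only).
x_sim = torch.randn(2000, 4)
y_sim = (x_sim[:, 0] > 0.5).float()
net = train(make_net(), x_sim, y_sim)

# Step 2: weakly supervised fine-tuning on data. In a CWoLa-style search the
# labels are only the mixture identities of the two regions, not per-event
# truth, so the placeholder labels below mark regions rather than events.
x_data = torch.randn(2000, 4)
y_mixture = torch.randint(0, 2, (2000,)).float()
net = train(net, x_data, y_mixture, epochs=10, lr=1e-4)
```

A meta-learning variant would replace step 1 with an optimization that explicitly trains the network to adapt quickly from few examples (for instance a MAML- or Reptile-style outer loop over simulated tasks), while the fine-tuning step on experimental data stays the same.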