Deep neural networks (DNNs) are increasingly being used to perform algorithm selection in combinatorial optimisation domains, particularly as they accommodate input representations that avoid the need to design and calculate features. Mounting evidence from domains that use images as input shows that deep convolutional networks are vulnerable to adversarial samples, in which a small perturbation of an instance causes the DNN to misclassify it. However, it remains unknown whether deep recurrent networks (DRNs), which have recently shown promise as algorithm selectors in the bin-packing domain, are equally vulnerable. We use an evolutionary algorithm (EA) to find perturbations of instances from two existing benchmarks for online bin packing that cause trained DRNs to misclassify: depending on the dataset, adversarial samples are successfully generated from up to 56% of the original instances. Analysis of the newly misclassified instances sheds light on the `fragility' of some training instances, i.e. instances for which it is trivial to find a small perturbation that results in a misclassification, and on the factors that influence this. Finally, the method generates a large number of new misclassified instances with a wide variation in confidence, providing a rich new source of training data for creating more robust models.
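To make the search procedure concrete, the sketch below shows one simple way an EA could hunt for such a perturbation: a (1+1)-EA that nudges a single item size per step and minimises the model's confidence in its original prediction until the predicted label flips. This is an illustrative assumption, not the paper's exact method; the `predict` callable, the item-size bounds, and the mutation operator are all hypothetical stand-ins.

```python
import random

def evolve_adversarial(instance, predict, max_evals=1000, lo=1, hi=100):
    """Minimal (1+1)-EA sketch for finding an adversarial bin-packing instance.

    instance : list[int]  -- item sizes of the original instance
    predict  : callable   -- hypothetical wrapper around the trained DRN,
                             returning (predicted_label, confidence)
    """
    target_label, best_conf = predict(instance)
    parent = list(instance)
    for _ in range(max_evals):
        # Mutation: perturb one randomly chosen item size by +/-1,
        # clamped to the assumed valid item-size range [lo, hi].
        child = list(parent)
        i = random.randrange(len(child))
        child[i] = min(hi, max(lo, child[i] + random.choice((-1, 1))))
        label, conf = predict(child)
        if label != target_label:
            return child            # perturbation flips the prediction
        if conf < best_conf:
            # Fitness = model confidence in the original label (minimised):
            # keep the child that most erodes that confidence.
            parent, best_conf = child, conf
    return None                     # no adversarial sample found in budget
```

Using confidence as the fitness signal gives the EA a gradient to follow even while the label is unchanged; a search that only checked for a flipped label would reduce to a random walk over neighbouring instances.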