Floating-point neural networks dominate modern machine learning but incur substantial inference cost, motivating interest in Boolean networks for resource-constrained settings. However, learning compact and accurate Boolean networks is challenging due to their combinatorial nature. In this work, we address this challenge from three angles: learned connections, compact convolutions, and adaptive discretization. First, we propose a novel strategy to learn efficient connections with no additional parameters and negligible computational overhead. Second, we introduce a novel convolutional Boolean architecture that exploits spatial locality while requiring fewer Boolean operations than existing methods. Third, we propose an adaptive discretization strategy that reduces the accuracy drop incurred when converting a continuous-valued network into a Boolean one. Extensive results on standard vision benchmarks demonstrate that our method's accuracy-versus-computation Pareto front significantly outperforms the prior state of the art, achieving better accuracy with up to 37x fewer Boolean operations.