An established measure of the expressive power of a ReLU neural network is the number of linear regions into which it partitions the input space. However, there exist many different, non-equivalent definitions of what a linear region actually is. We systematically assess which papers use which definitions and discuss how they relate to one another. We then analyze the computational complexity of counting the number of such regions for the various definitions. In general, this turns out to be an intractable problem. We prove NP- and #P-hardness results even for networks with a single hidden layer, and strong hardness-of-approximation results for networks with two or more hidden layers. Finally, on the algorithmic side, we show that counting linear regions can at least be achieved in polynomial space for some common definitions.
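As a purely illustrative sketch (not taken from the paper), one common but imprecise way to gauge the number of linear regions of a one-hidden-layer ReLU network is to count the distinct activation patterns attained on a dense sample grid; the network is affine on the set of inputs sharing a pattern, so the count lower-bounds the number of regions under most definitions. The network sizes, sampling scheme, and variable names below are assumptions chosen for illustration and do not reflect the paper's definitions or algorithms.

```python
# Illustrative sketch (assumption, not the paper's method): lower-bound the number
# of linear regions of a one-hidden-layer ReLU network by counting distinct
# activation patterns on a sample grid over the input square [-1, 1]^2.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 2, 8                      # hypothetical small network
W = rng.standard_normal((n_hidden, n_in))  # hidden-layer weights
b = rng.standard_normal(n_hidden)          # hidden-layer biases

# Sample a grid and record which ReLU units are active (pre-activation > 0) at each point.
xs = np.linspace(-1.0, 1.0, 400)
grid = np.array(np.meshgrid(xs, xs)).reshape(2, -1).T   # shape (160000, 2)
patterns = (grid @ W.T + b > 0)                         # boolean activation patterns

# Each distinct pattern corresponds to a polyhedral piece on which the network is
# affine, so the number of distinct patterns lower-bounds the region count.
n_patterns = len(np.unique(patterns, axis=0))
print(f"distinct activation patterns on the grid: {n_patterns}")
```

Note that grid sampling can miss small regions and says nothing about which of the non-equivalent region definitions is being counted, which is precisely the kind of subtlety the paper addresses.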