We develop a flexible feature selection framework based on deep neural networks that approximately controls the false discovery rate (FDR), a measure of Type-I error. The method applies to any architecture whose first layer is fully connected; from the second layer onward it accommodates multilayer perceptrons (MLPs) of arbitrary width and depth, convolutional and recurrent networks, attention mechanisms, residual connections, and dropout. The procedure is likewise compatible with training by stochastic gradient descent under data-independent initializations and learning rates. To the best of our knowledge, this is the first work to provide a theoretical guarantee of FDR control for feature selection in such a general deep learning setting. Our analysis rests on a multi-index data-generating model and an asymptotic regime in which the feature dimension $n$ diverges faster than the latent dimension $q^{*}$, while the sample size, the number of training iterations, the network depth, and the hidden-layer widths are left unrestricted. In this regime, we show that each coordinate of the gradient-based feature-importance vector admits a marginal normal approximation, which underpins the asymptotic validity of the FDR control. The main theoretical restriction is a $\mathbf{B}$-right orthogonal invariance assumption on the design matrix, and we discuss broader generalizations. Numerical experiments corroborate the theoretical findings.
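To fix ideas, the following displays sketch the setting in illustrative notation (the symbols $g$, $\varepsilon_i$, $\widehat{S}$, and $\mathcal{H}_0$ are ours and may differ from the definitions in the main text). Under a multi-index model, the response depends on the $n$-dimensional features only through $q^{*}$ linear combinations,
\[
y_i = g\bigl(\mathbf{B}^{\top}\mathbf{x}_i,\ \varepsilon_i\bigr), \qquad \mathbf{x}_i \in \mathbb{R}^{n}, \quad \mathbf{B} \in \mathbb{R}^{n \times q^{*}},
\]
and the FDR of a selected feature set $\widehat{S} \subseteq \{1, \dots, n\}$ is
\[
\mathrm{FDR} = \mathbb{E}\!\left[\frac{\bigl|\widehat{S} \cap \mathcal{H}_0\bigr|}{\max\bigl\{|\widehat{S}|,\, 1\bigr\}}\right],
\]
where $\mathcal{H}_0$ collects the null features, i.e., the indices $j$ whose corresponding rows of $\mathbf{B}$ are zero.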