Dynamic feature selection (DFS) addresses budget constraints in decision-making by sequentially acquiring features for each instance, making it appealing for resource-limited scenarios. However, existing DFS methods require models specifically designed for the sequential acquisition setting, limiting compatibility with models already deployed in practice. Furthermore, they provide limited uncertainty quantification, undermining trust in high-stakes decisions. In this work, we show that DFS introduces new uncertainty sources compared to the static setting. We formalise how model adaptation to feature subsets induces epistemic uncertainty, how standard imputation strategies bias aleatoric uncertainty estimation, and why predictive confidence fails to discriminate between good and bad selection policies. We also propose a model-agnostic DFS framework compatible with pre-trained classifiers, including interpretable-by-design models, through efficient subset reparametrisation strategies. Empirical evaluation on tabular and image datasets demonstrates competitive accuracy against state-of-the-art greedy and reinforcement learning-based DFS methods with both neural and rule-based classifiers. We further show that the identified uncertainty sources persist across most existing approaches, highlighting the need for uncertainty-aware DFS.