In next-generation communications and networks, machine learning (ML) models are expected to deliver not only accurate predictions but also well-calibrated confidence scores that reflect the true likelihood of correct decisions. This paper studies the calibration performance of an ML-based outage predictor within a single-user, multi-resource allocation framework. We first establish key theoretical properties of this system's outage probability (OP) under perfect calibration. Importantly, we show that as the number of resources grows, the OP of a perfectly calibrated predictor approaches the model's expected output conditioned on that output being below the classification threshold. In contrast, when only one resource is available, the system's OP equals the model's overall expected output. We then derive conditions on the OP of a perfectly calibrated predictor. These findings guide the choice of the classification threshold to achieve a desired OP, helping system designers meet specific reliability requirements. We also demonstrate that post-processing calibration cannot improve the system's minimum achievable OP, as it does not introduce new information about future channel states. Additionally, we show that well-calibrated models belong to a broader class of predictors that necessarily improve the OP. In particular, we establish a monotonicity condition that the accuracy-confidence function must satisfy for such improvement to occur. To demonstrate these theoretical properties, we conduct a rigorous simulation-based analysis using two post-processing calibration techniques: Platt scaling and isotonic regression. As part of this framework, the predictor is trained with an outage loss function specifically designed for this system. Furthermore, the analysis is performed on Rayleigh fading channels whose temporal correlation follows Clarke's 2D model, which accounts for receiver mobility.
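As a concrete illustration of the post-processing calibration step mentioned above, the following is a minimal sketch, not the paper's implementation, of applying Platt scaling and isotonic regression to a binary outage predictor's scores and then estimating the empirical OP among resources whose calibrated score falls below a classification threshold. The synthetic scores and labels, the threshold value gamma, and the use of scikit-learn's LogisticRegression and IsotonicRegression as the calibrators are all illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: post-processing calibration of an outage predictor's scores.
# All data, the threshold gamma, and the calibrator choices are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)

# Synthetic (score, outage-label) pairs standing in for a trained predictor's
# outputs on a held-out calibration set; labels are generated so the raw
# scores are deliberately miscalibrated.
n = 5000
scores = rng.uniform(0.0, 1.0, n)                              # raw outputs in [0, 1]
labels = (rng.uniform(0.0, 1.0, n) < scores**2).astype(int)    # 1 = outage occurred

# Platt scaling: fit a logistic regression on the raw scores.
platt = LogisticRegression()
platt.fit(scores.reshape(-1, 1), labels)
scores_platt = platt.predict_proba(scores.reshape(-1, 1))[:, 1]

# Isotonic regression: fit a monotone non-decreasing map from scores to labels.
iso = IsotonicRegression(out_of_bounds="clip")
iso.fit(scores, labels)
scores_iso = iso.predict(scores)

# With a classification threshold gamma, a resource is declared usable when its
# calibrated outage score is below gamma; under perfect calibration the
# empirical OP among declared resources approximates E[output | output < gamma].
gamma = 0.1  # illustrative threshold, not a value from the paper
for name, s in [("raw", scores), ("platt", scores_platt), ("isotonic", scores_iso)]:
    selected = s < gamma
    op = labels[selected].mean() if selected.any() else float("nan")
    print(f"{name:8s} empirical OP among selected resources: {op:.3f}")
```

In this sketch, comparing the empirical OP under the raw and calibrated scores mirrors the abstract's point that post-processing calibration reshapes the confidence scores, and hence the threshold's operating point, without adding information about future channel states.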