Implicit Neural Representations (INRs) have emerged as a promising deep learning approach for compactly representing large volumetric datasets. These models can act as surrogates for volume data, enabling efficient storage and on-demand reconstruction via model predictions. However, conventional deterministic INRs provide only value predictions, with no insight into the model's prediction uncertainty or the impact of inherent noise in the data. This limitation can lead to unreliable data interpretation and visualization due to prediction inaccuracies in the reconstructed volume. Identifying erroneous results extracted from model-predicted data may be infeasible, as the raw data may be unavailable due to its large size. To address this challenge, we introduce REV-INR, Regularized Evidential Implicit Neural Representation, which learns to accurately predict data values along with the associated coordinate-level data uncertainty and model uncertainty, using only a single forward pass of the trained REV-INR during inference. By comprehensively comparing and contrasting REV-INR with existing well-established deep uncertainty estimation methods, we show that REV-INR achieves the best volume reconstruction quality with robust data (aleatoric) and model (epistemic) uncertainty estimates at the fastest inference time. Consequently, we demonstrate that REV-INR facilitates assessment of the reliability and trustworthiness of extracted isosurfaces and volume visualization results, enabling analyses driven solely by model-predicted data.
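The abstract does not specify how REV-INR parameterizes its predictions, but the single-forward-pass recovery of both aleatoric and epistemic uncertainty is characteristic of evidential regression, where the network emits the parameters of a Normal-Inverse-Gamma (NIG) distribution per input coordinate. The following minimal sketch illustrates that idea only; the function name `evidential_head`, the use of a softplus to constrain parameters, and the NIG moment formulas are illustrative assumptions, not REV-INR's actual architecture:

```python
import numpy as np

def softplus(x):
    """Numerically simple softplus to keep parameters positive."""
    return np.log1p(np.exp(x))

def evidential_head(raw):
    """Map raw per-coordinate network outputs (..., 4) to a prediction
    and its aleatoric/epistemic uncertainty via NIG parameters.

    Illustrative sketch (assumed, not from the paper):
      gamma -- predicted data value (NIG mean)
      nu    -- virtual observation count for the mean (> 0)
      alpha -- inverse-gamma shape (> 1 so moments exist)
      beta  -- inverse-gamma scale (> 0)
    """
    gamma = raw[..., 0]
    nu    = softplus(raw[..., 1])
    alpha = softplus(raw[..., 2]) + 1.0
    beta  = softplus(raw[..., 3])

    # Standard NIG moments: expected data noise and variance of the mean.
    aleatoric = beta / (alpha - 1.0)        # E[sigma^2]  (data uncertainty)
    epistemic = beta / (nu * (alpha - 1.0)) # Var[mu]     (model uncertainty)
    return gamma, aleatoric, epistemic

# Usage on a dummy batch of 3 coordinates (raw outputs stand in for an MLP):
raw = np.zeros((3, 4))
value, data_unc, model_unc = evidential_head(raw)
```

Both uncertainties fall out of the same forward pass as the value prediction, which is what makes this family of methods cheaper at inference than ensembles or Monte Carlo dropout, consistent with the fastest-inference claim above.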