The widespread use of Deep Neural Networks (DNNs) has recently led to their application to challenging scientific visualization tasks. While advanced DNNs demonstrate impressive generalization ability, understanding factors such as prediction quality, confidence, robustness, and uncertainty is crucial; these insights help application scientists make informed decisions. However, DNNs lack built-in mechanisms for measuring prediction uncertainty, prompting the development of dedicated frameworks for constructing robust, uncertainty-aware models tailored to various visualization tasks. In this work, we develop uncertainty-aware implicit neural representations to model steady-state vector fields effectively. We comprehensively evaluate the efficacy of two principled deep uncertainty estimation techniques, (1) Deep Ensemble and (2) Monte Carlo Dropout, for enabling uncertainty-informed visual analysis of features in steady vector field data. Our detailed exploration across several vector data sets indicates that uncertainty-aware models produce informative visualizations of vector field features. Furthermore, incorporating prediction uncertainty improves the resilience and interpretability of our DNN model, making it applicable to the analysis of non-trivial vector field data sets.
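To illustrate the second of the two techniques named above, the following is a minimal sketch of the Monte Carlo Dropout idea: dropout is kept active at inference time, the network is evaluated many times on the same input, and the mean of the stochastic outputs serves as the prediction while their standard deviation serves as an uncertainty estimate. The toy one-hidden-layer network and its weights are purely hypothetical and are not taken from the paper's actual implicit neural representation.

```python
import random
import statistics

def mc_dropout_predict(forward, x, n_samples=100, p=0.5, seed=0):
    """Monte Carlo Dropout: run `n_samples` stochastic forward passes
    with dropout kept active, then report the sample mean as the
    prediction and the sample standard deviation as the uncertainty."""
    rng = random.Random(seed)
    outputs = [forward(x, rng, p) for _ in range(n_samples)]
    return statistics.mean(outputs), statistics.stdev(outputs)

def toy_forward(x, rng, p):
    """A hypothetical one-hidden-layer network with inverted dropout
    on its two hidden units (weights are illustrative, not learned)."""
    w1, w2 = 0.8, -0.3            # input -> hidden weights
    v1, v2 = 1.5, 2.0             # hidden -> output weights
    h1 = max(0.0, w1 * x)         # ReLU activations
    h2 = max(0.0, w2 * x)
    # Inverted dropout: drop each unit with probability p,
    # rescale the survivors by 1/(1-p) to keep the expectation.
    h1 = 0.0 if rng.random() < p else h1 / (1.0 - p)
    h2 = 0.0 if rng.random() < p else h2 / (1.0 - p)
    return v1 * h1 + v2 * h2

mean, std = mc_dropout_predict(toy_forward, x=1.0, n_samples=200)
print(f"prediction ~ {mean:.3f}, uncertainty ~ {std:.3f}")
```

In an actual implicit neural representation, `toy_forward` would be replaced by the trained coordinate network queried at each spatial location, yielding a per-location uncertainty field that can be visualized alongside the predicted vector field; Deep Ensemble works analogously, but averages over independently trained networks rather than dropout masks.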