Learning a continuous and reliable representation of physical fields from sparse sampling is challenging and affects diverse scientific disciplines. In recent work, we presented MMGN (Multiplicative and Modulated Gabor Network), a novel model built on implicit neural networks. Here, we design additional studies that leverage explainability methods to complement the earlier experiments and deepen our understanding of the latent representations the model generates. The adopted methods are general enough to be applied to any latent-space inspection. Preliminary results demonstrate the contextual information captured in the latent representations and its impact on model performance. As this is work in progress, we will continue to verify our findings and to develop novel explainability approaches.