CoVariance Neural Networks (VNNs) perform convolutions on the graph determined by the covariance matrix of the data, which enables expressive and stable covariance-based learning. However, covariance matrices are typically dense, fail to encode conditional independence, and are often precomputed in a task-agnostic way, which may hinder performance. To overcome these limitations, we study Precision Neural Networks (PNNs), i.e., VNNs that operate on the precision matrix (the inverse covariance). The precision matrix naturally encodes statistical independence, often exhibits sparsity, and preserves the spectral structure of the covariance. To make precision estimation task-aware, we formulate an optimization problem that jointly learns the network parameters and the precision matrix, and solve it via alternating optimization, sequentially updating the network weights and the precision estimate. We theoretically bound the distance between the estimated and true precision matrices at each iteration, and demonstrate the effectiveness of joint estimation compared to two-step approaches on synthetic and real-world data.
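The core operation behind VNNs and PNNs is a polynomial graph filter applied to the data, with the covariance (or precision) matrix as the graph shift operator. The following is a minimal NumPy sketch of that filtering step, not the paper's implementation; the function name `precision_filter` and the filter taps `h` are illustrative assumptions. It also checks the spectral claim from the abstract: the precision matrix shares the covariance's eigenvectors, with reciprocal eigenvalues.

```python
import numpy as np

def precision_filter(Theta, X, h):
    """Apply the polynomial graph filter sum_k h[k] * Theta^k @ X.

    Theta : (n, n) graph shift operator (here, the precision matrix)
    X     : (n, m) data matrix with n variables and m samples
    h     : list of filter taps (illustrative, learned in practice)
    """
    out = np.zeros_like(X)
    P = np.eye(Theta.shape[0])  # Theta^0
    for hk in h:
        out += hk * P @ X
        P = P @ Theta  # raise Theta to the next power
    return out

rng = np.random.default_rng(0)
n, m = 5, 200
A = rng.standard_normal((n, n))
Sigma = A @ A.T / n + np.eye(n)   # a well-conditioned covariance matrix
Theta = np.linalg.inv(Sigma)      # its precision matrix
X = rng.standard_normal((n, m))

# Filter the data on the precision graph with (illustrative) taps.
Y = precision_filter(Theta, X, [0.5, 0.3, 0.2])

# Spectral structure is preserved: Theta has the same eigenvectors as
# Sigma, and its eigenvalues are the reciprocals of Sigma's.
```

In a full PNN, stacking such filters with pointwise nonlinearities gives the network layers, and the alternating scheme described above would interleave gradient updates of the taps `h` with updates of the precision estimate `Theta`.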