Mutual Information (MI) is a powerful statistical measure that quantifies shared information between random variables, and it is particularly valuable in high-dimensional data analysis across fields such as genomics, natural language processing, and network science. However, computing MI becomes prohibitively expensive for large datasets, which typically require a pairwise approach in which each column is compared against every other column. This work introduces a matrix-based algorithm that accelerates MI computation by leveraging vectorized operations and optimized matrix calculations. By transforming the traditional pairwise approach into bulk matrix operations, the proposed method enables efficient MI calculation across all variable pairs simultaneously. Experimental results demonstrate significant performance improvements, with computation time reduced by up to a factor of 50,000 on the largest dataset, particularly when hardware-optimized frameworks are used. The approach promises to expand MI's applicability in data-driven research by overcoming previous computational limitations.
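To make the idea concrete, here is a minimal sketch of how pairwise MI over all columns of a discrete data matrix can be cast as bulk matrix products rather than a per-pair loop. This is an illustrative reconstruction, not the paper's actual implementation: the function name `pairwise_mi` and the one-hot-indicator formulation are assumptions, and real data would first need to be discretized into `n_states` levels.

```python
import numpy as np

def pairwise_mi(X, n_states):
    """All-pairs mutual information for a discrete matrix X
    (n_samples x n_vars), using matrix products so every variable
    pair is handled at once instead of in a nested Python loop."""
    n, p = X.shape
    # One-hot indicator matrix per state: ind[a] has shape (n, p)
    ind = [(X == a).astype(float) for a in range(n_states)]
    # Marginal probabilities P(x_i = a), one vector of length p per state
    marg = [m.mean(axis=0) for m in ind]
    mi = np.zeros((p, p))
    for a in range(n_states):
        for b in range(n_states):
            # Joint probability P(x_i = a, x_j = b) for ALL pairs (i, j)
            # in a single (p x p) matrix product.
            joint = ind[a].T @ ind[b] / n
            denom = np.outer(marg[a], marg[b])  # P(x_i = a) * P(x_j = b)
            with np.errstate(divide="ignore", invalid="ignore"):
                term = joint * np.log(joint / denom)
            # Zero-probability cells contribute nothing to the MI sum.
            mi += np.where(joint > 0, term, 0.0)
    return mi
```

The outer loop runs over the (usually small) number of states, not over the O(p^2) variable pairs; the heavy lifting is a handful of dense matrix multiplications, which is exactly the kind of workload that vectorized and hardware-optimized backends accelerate well.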