A central challenge in analyzing multivariate interactions within complex systems is to decompose how multiple inputs jointly determine an output. Existing approaches generally operate on observed probability distributions and can conflate a system's intrinsic functional logic with statistical artifacts of limited data. As a result, distinct systems can yield identical observations, rendering information decomposition fundamentally underdetermined and obscuring true higher-order interactions. We introduce Functional Information Decomposition (FID), a theoretical and computational framework that defines informational components with respect to a system's complete input-output mapping, thereby addressing a core cross-scale inference problem: determining how information carried by individual components combines to shape system-level behavior. When the mapping is fully specified, FID provides a unique decomposition into independent and synergistic contributions. Crucially, given only partial observations, FID characterizes the entire space of consistent decompositions by sampling compatible functions, making inferential limits explicit. A complementary geometric perspective clarifies the structural origin of informational components. We demonstrate FID's interdisciplinary utility on canonical logical functions, Conway's Game of Life, and gene-expression-based prediction of cancer drug response, and provide an open-source implementation. By separating functional architecture from observational distribution, FID offers a principled foundation for analyzing multivariate dependence in both fully and partially observed complex systems.
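The underdetermination described above can be made concrete with a minimal sketch (this is an illustrative toy, not the paper's FID implementation): when only some input-output rows of a Boolean system are observed, several distinct complete mappings remain consistent with the data, and those mappings can differ in their synergistic structure.

```python
from itertools import product

inputs = list(product([0, 1], repeat=2))  # all 4 input combinations

# Hypothetical partial observations: only two of the four rows are seen.
observed = {(0, 0): 0, (0, 1): 1}

# Enumerate all 16 Boolean functions of two inputs as truth tables
# and keep those that agree with the observed rows.
consistent = []
for outputs in product([0, 1], repeat=4):
    table = dict(zip(inputs, outputs))
    if all(table[x] == y for x, y in observed.items()):
        consistent.append(table)

print(len(consistent))  # 4 complete mappings remain consistent
# Among them: XOR (outputs 0,1,1,0), which is purely synergistic,
# and OR (outputs 0,1,1,1), which is not -- identical observations,
# different functional logic.
```

Characterizing the full set of consistent decompositions, as FID does by sampling compatible functions, generalizes this exhaustive enumeration to settings where the function space is too large to list.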