Categorical deep learning (CDL) has recently emerged as a framework that leverages category theory to unify diverse neural architectures. Whereas geometric deep learning (GDL) is grounded in the specific setting of invariance under group actions, CDL aims to provide domain-independent abstractions for reasoning about models and their properties. In this paper, we contribute to this program by developing a coalgebraic foundation for equivariant representations in deep learning, since the classical notions of group actions and equivariant maps are naturally generalized by the coalgebraic formalism. Our first main result shows that, given an embedding of data sets formalized as a functor from SET to VECT, and given a notion of invariant behavior on data sets modeled by an endofunctor on SET, there is a corresponding endofunctor on VECT that is compatible with the embedding, in the sense that this lifted functor recovers the analogous notion of invariant behavior on the embedded data. Building on this foundation, we then establish a universal approximation theorem for equivariant maps in this generalized setting: continuous equivariant functions can be approximated within our coalgebraic framework for a broad class of symmetries. This work thus provides a categorical bridge between the abstract specification of invariant behavior and its concrete realization in neural architectures.
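The lifting described above can be illustrated in its classical special case. The following is a minimal sketch, under illustrative assumptions not taken from the paper: the data set is X = {0, 1, 2}, the invariant behavior is the action of the cyclic group Z/3 by rotation, and the embedding from SET to VECT sends a data point to its one-hot vector in R^3 (the free vector space on X). The names `act`, `embed`, and `rho` are ours, chosen for exposition.

```python
# Illustrative sketch of the classical special case: a group action on a
# finite data set, lifted along a one-hot (free vector space) embedding.
# All names and choices here are illustrative assumptions, not the paper's.

X = [0, 1, 2]

def act(g, x):
    """Action of g in Z/3 on the data set X: rotation by g steps."""
    return (x + g) % 3

def embed(x):
    """Embedding SET -> VECT: a data point becomes a one-hot vector in R^3."""
    return [1.0 if i == x else 0.0 for i in range(3)]

def rho(g):
    """Lifted action on R^3: the permutation matrix induced by act(g, -),
    stored as a list of rows. Column x of rho(g) is embed(act(g, x))."""
    cols = [embed(act(g, x)) for x in X]
    return [[cols[j][i] for j in range(3)] for i in range(3)]

def apply(m, v):
    """Matrix-vector product for 3x3 m and length-3 v."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# Compatibility of the lift with the embedding: acting linearly on the
# embedded point agrees with acting on the data point and then embedding,
# i.e. the lifted functor recovers the invariant behavior on embedded data.
for g in range(3):
    for x in X:
        assert apply(rho(g), embed(x)) == embed(act(g, x))
```

In this toy instance, the compatibility square commuting is exactly the statement that `rho` is the permutation representation of Z/3; the paper's first result generalizes this from group actions to behaviors modeled by arbitrary endofunctors on SET.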