Modern neural networks rely on generic activation functions (ReLU, GELU, SiLU) that ignore the mathematical structure inherent in scientific data. We propose Neuro-Symbolic Activation Discovery, a framework that uses Genetic Programming to extract interpretable mathematical formulas from data and inject them as custom activation functions. Our key contribution is the discovery of a Geometric Transfer phenomenon: activation functions learned from particle-physics data generalize successfully to ecological classification, outperforming standard activations (ReLU, GELU, SiLU) in both accuracy and parameter efficiency. On the Forest Cover dataset, our Hybrid Transfer model achieves 82.4% accuracy with only 5,825 parameters, versus 83.4% accuracy from a conventional heavy network with 31,801 parameters -- a 5.5x parameter reduction at the cost of one percentage point of accuracy. We introduce a Parameter Efficiency Score ($E_{param} = \mathrm{AUC} / \log_{10}(\mathrm{Params})$) and show that lightweight hybrid architectures consistently achieve 18-21% higher efficiency than over-parameterized baselines. Crucially, we establish boundary conditions: Physics-to-Ecology transfer succeeds (both domains involve continuous Euclidean measurements), while Physics-to-Text transfer fails (discrete word frequencies require different mathematical structures). Our work opens a pathway toward domain-specific activation libraries for efficient scientific machine learning.
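The Parameter Efficiency Score defined above is straightforward to compute. A minimal sketch follows; the AUC values are hypothetical placeholders (the abstract reports accuracy, not AUC), chosen only to illustrate how a lightweight model can score higher despite slightly lower raw performance:

```python
import math

def parameter_efficiency(auc: float, n_params: int) -> float:
    """Parameter Efficiency Score: E_param = AUC / log10(Params)."""
    return auc / math.log10(n_params)

# Hypothetical AUC values for illustration; parameter counts from the abstract.
hybrid = parameter_efficiency(auc=0.90, n_params=5_825)    # lightweight Hybrid Transfer model
heavy = parameter_efficiency(auc=0.91, n_params=31_801)    # over-parameterized baseline

print(f"hybrid E_param = {hybrid:.3f}")
print(f"heavy  E_param = {heavy:.3f}")
print(f"relative gain  = {hybrid / heavy - 1:.1%}")
```

Because the denominator grows only logarithmically in the parameter count, a ~5.5x reduction in parameters outweighs a small AUC deficit, which is how the lightweight models post 18-21% higher efficiency scores.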