Neural network architectures designed for function parameterization, such as the Bag-of-Functions (BoF) framework, bridge the gap between the expressivity of deep learning and the interpretability of classical signal processing. However, these models are inherently sensitive to parameter initialization, as traditional data-agnostic schemes fail to capture the structural properties of the target signals, often leading to suboptimal convergence. In this work, we propose a prior-informed design strategy that leverages the intrinsic spectral and temporal structure of the data to guide both network initialization and architectural configuration. A principled methodology is introduced that uses the Fast Fourier Transform to extract dominant seasonal priors, informing model depth and initial states, and a residual-based regression approach to parameterize trend components. Crucially, this structural alignment enables a substantial reduction in encoder dimensionality without compromising reconstruction fidelity. A supporting theoretical analysis provides guidance on trend estimation under finite-sample regimes. Extensive experiments on synthetic and real-world benchmarks demonstrate that embedding data-driven priors significantly accelerates convergence, reduces performance variability across trials, and improves computational efficiency. Overall, the proposed framework enables more compact and interpretable architectures while outperforming standard initialization baselines, without altering the core training procedure.
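The two data-driven priors named above, dominant seasonal components extracted via the FFT and a trend fit on the seasonal residual, can be illustrated with a minimal sketch. All function names, the choice of a degree-1 trend model, and the cosine parameterization are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def seasonal_priors(signal, k=3):
    """Extract the k dominant seasonal components as (frequency,
    amplitude, phase) triples via the real FFT.

    Frequencies are in cycles per sample. The linear detrending step
    and the cosine convention are assumptions of this sketch.
    """
    n = len(signal)
    t = np.arange(n)
    # Remove a rough linear trend so it does not leak into low-frequency bins.
    detrended = signal - np.polyval(np.polyfit(t, signal, 1), t)
    spectrum = np.fft.rfft(detrended)
    freqs = np.fft.rfftfreq(n)
    # Rank bins by magnitude, skipping the DC component at index 0.
    top = np.argsort(np.abs(spectrum[1:]))[::-1][:k] + 1
    return [(freqs[i], 2 * np.abs(spectrum[i]) / n, np.angle(spectrum[i]))
            for i in top]

def residual_trend(signal, priors):
    """Residual-based trend regression: reconstruct the seasonal part
    from the priors, subtract it, and fit a linear trend on the rest."""
    n = len(signal)
    t = np.arange(n)
    seasonal = sum(a * np.cos(2 * np.pi * f * t + p) for f, a, p in priors)
    slope, intercept = np.polyfit(t, signal - seasonal, 1)
    return slope, intercept
```

On a synthetic series such as `0.01 * t + sin(2 * pi * t / 64)`, the dominant extracted frequency lands on the 1/64 cycles-per-sample bin with amplitude near 1, and the residual regression recovers the 0.01 slope; in the paper's framework such priors would then set the encoder's depth and initial states rather than being used directly for reconstruction.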