According to the Hughes phenomenon, the major challenges encountered in computations with learning models come from the scale of complexity, i.e. the so-called curse of dimensionality. Various approaches exist for accelerating learning computations with minimal loss of accuracy, ranging from model-level to implementation-level techniques. To the best of our knowledge, the former is rarely used in its basic form. This may be because the theoretical understanding of the mathematical insights behind model decomposition approaches, and hence the ability to develop mathematical improvements, has lagged behind. We describe a model-level decomposition approach that combines the decomposition of the operators with the decomposition of the network. We perform a feasibility analysis of the resulting algorithm, in terms of both its accuracy and its scalability.
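As one concrete illustration of what operator-level decomposition can look like, the sketch below factorizes a dense layer's weight matrix into two low-rank factors via truncated SVD. This is a generic, minimal example of the idea of trading a small accuracy loss for reduced computation; it is not the specific method described in this work, and all names and dimensions here are hypothetical.

```python
import numpy as np

# Hypothetical sketch: approximate a dense weight matrix W (d_out x d_in)
# by two low-rank factors U @ V, reducing the cost of a forward pass from
# O(d_out * d_in) to O(r * (d_out + d_in)) for rank r << min(d_out, d_in).
rng = np.random.default_rng(0)
d_out, d_in, r = 256, 512, 16

W = rng.standard_normal((d_out, d_in))
x = rng.standard_normal(d_in)

# Truncated SVD gives the best rank-r approximation in the Frobenius norm.
U_full, s, Vt = np.linalg.svd(W, full_matrices=False)
U = U_full[:, :r] * s[:r]        # shape (d_out, r), singular values folded in
V = Vt[:r, :]                    # shape (r, d_in)

y_exact = W @ x
y_approx = U @ (V @ x)           # two skinny matmuls instead of one dense one

rel_err = np.linalg.norm(y_exact - y_approx) / np.linalg.norm(y_exact)
params_dense = d_out * d_in
params_lowrank = r * (d_out + d_in)
print(f"relative error: {rel_err:.3f}, "
      f"parameter ratio: {params_lowrank / params_dense:.3f}")
```

For a random Gaussian matrix the singular values decay slowly, so the approximation error is large; real learned weight matrices often have much faster spectral decay, which is what makes this kind of decomposition attractive in practice.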